Welcome to the December 15, 2014 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets (click here) and for iPhones (click here) and iPads (click here).
HEADLINES AT A GLANCE
Watson Wannabes: 4 Open Source Projects for Machine Intelligence
InfoWorld (12/15/14) Serdar Yegulalp
Four groups have been developing IBM Watson-like systems based on open source work. The U.S. Defense Advanced Research Projects Agency's DeepDive project is designed to emulate Watson's ability to improve its decision-making over time with human guidance. DeepDive, developed by University of Wisconsin-Madison professor Christopher Re, aims to create an automated system for classifying unstructured data. IBM's Unstructured Information Management Architecture, which was open-sourced and is now maintained by the Apache Foundation, features support for multiple programming languages, with updates added periodically. OpenCog aims to provide research scientists and software developers with a common platform to build and share artificial intelligence programs. The framework already is in use in natural-language applications, both in research and in commercial settings, according to OpenCog's creators. The Open Advancement of Question Answering Systems program, jointly initiated by IBM and Carnegie Mellon University, aims to develop "open advancement in the engineering of question answering systems--language software systems that provide direct answers to questions posed in natural language."
Artificial Intelligence Isn't a Threat--Yet
The Wall Street Journal (12/11/14) Gary Marcus
Luminaries including Stephen Hawking and Elon Musk recently have warned about the potential threat artificial intelligence (AI) poses to the human race in terms that strike some as fantastical. New York University professor Gary Marcus, CEO of Geometric Intelligence, says Hawking and Musk have a point, but the existential threat they fear is still many decades off and people face somewhat different threats from AI in the nearer term. Marcus says "superintelligent" machines are unlikely to arrive soon, but we are already in the process of placing a great deal of power and control in the hands of automated systems and need to be certain those systems can handle it. Marcus points to stock markets and autonomous driving technology as two examples of automated systems that could do tremendous damage if not properly and rigorously controlled. Although Marcus acknowledges such technologies have tremendous potential to do good, he says steps have to be taken to ensure they do not go haywire. Marcus says those steps could include funding advances in program verification and establishing laws surrounding the use of automated systems in specific, risky applications.
Aiming for 1 Million 'Girls Who Code'
CBS News (12/11/14)
Women are the majority in the workforce, in college, and as income earners, but they are being left out of innovating, says Reshma Saujani, who wants to introduce coding to more than 1 million girls over the next decade. She says it is still acceptable in society for girls to say they hate math at a time when technology is critical to everything that is created or built. Saujani is the founder of Girls Who Code, which has more than 150 clubs across the country teaching girls robotics, Web design, and mobile development. "What makes it interesting is, like, you are the one creating the game now, you're not just sitting there playing the game," says 17-year-old Aisha Soumaoro, who is enrolled in the club at Democracy Prep High School in Harlem. Saujani says technology has to be cool and work has to be fun for things to change. Girls Who Code already has nearly 3,000 alumni nationwide, and has attracted support from companies such as Facebook and Twitter. Saujani says she is still waiting on that "eureka moment" that will inspire girls to become more active participants in shaping a digital world. An estimated 1.4 million job openings are expected for computer specialists by 2020.
CCC BRAIN Workshop: Research Interfaces Between Brain Science and Computer Science
CCC Blog (12/11/14) Helen Vasaly
The Computing Community Consortium (CCC) and the U.S. National Science Foundation recently held a workshop on the interfaces between brain research and computer science. The workshop brought together several dozen neuroscientists and computer scientists with plenary speeches and panel discussions on topics including brain imaging and mapping, the brain and the body, computing, and data. Asked to give a "grand prediction" about the future intersection of neuroscience and computer science, speaker Jack Gallant predicted "some time in the near future we should be able to decode internal speech...which can then be read back to you from a speaker using fMRI responses." One major topic of discussion was how thoroughly researchers will have to understand the brain before that knowledge will enable them to start creating computer simulacrums of the brain. Another topic was how big data technology can help neuroscientists make sense of the mountains of data currently being generated on the brain. The video recordings of the workshop panels and plenary talks will be posted alongside the presentations on the CCC website in the coming weeks, with a workshop report forthcoming early next year.
Kent State Researchers to Study Social Media Use During Crises and Disasters
Kent State University (12/11/14) Jim Maxwell; Emily Vincent
Kent State University researchers have received a $300,000 U.S. National Science Foundation grant to study how human dynamics across social media and social networks can be modeled. The grant is part of a $999,887 collaboration with San Diego State University and the University of Arkansas. The researchers will use information diffusion, visualization, and simulations to study the public responses to disaster warnings and alerts, as well as the public opinions of controversial social topics at the state or national level. "The outcomes yielded from this research will assist in better designing and implementing disaster warnings and alerts, as well as in more efficiently disseminating political messages via social media and social networks," says Kent State professor Xinyue Ye. The researchers will create a prototype platform using social media to study how people respond and react to messages warning of inclement weather, earthquakes, wildfires, disease outbreaks, and evacuation orders. "The study may also allow government agencies to communicate more effectively to the public and be better prepared for both natural and human-made crises," Ye says. In addition, the social media analytic tools the researchers develop will be able to measure how such messages are disseminated online and to predict the outcomes of referendum votes.
Researchers Will Study Police Confrontations Via Body Cameras
Technology Review (12/11/14) David Talbot
Researchers from the University of California, Los Angeles (UCLA) will study video and audio streams from body cameras used by one police department next year. The team will collect footage from 50 to 100 officers, and use software to categorize police work into tasks such as talking with citizens, walking, driving, and going into buildings. The researchers also will try to determine whether software might help detect when encounters with the public escalate and are then calmed by police officers. The study could help show how confrontations are prevented from getting out of hand. "That would be a huge benefit in terms of training," says UCLA's Jeff Brantingham, who previously co-founded PredPol, a startup that predicts where crimes are likely to occur. Brantingham will work with UCLA mathematician Angela Bertozzi. The constantly changing field of view of body cameras poses a challenge to automating the analysis of the video, according to UCLA's Song-Chun Zhu. Nighttime lighting can compound matters, and the footage does not show the officers themselves, although their hands are visible in some cases.
Optical Illusions Fool Computers Into Seeing Things
New Scientist (12/11/14) Jacob Aron
University of Wyoming researchers are studying a particular type of image-recognition algorithm called a deep neural network (DNN), combined with a second algorithm designed to evolve different pictures. The algorithms, working in conjunction with human judgment, have previously created images of apples and faces, and the researchers wondered if replacing the human with a DNN, to work alongside the genetic algorithm, would work as well, resulting in a program that could generate creative pictures by itself. "We were expecting that we would get the same thing, a lot of very high-quality recognizable images," says University of Wyoming researcher Jeff Clune. "Instead, we got these rather bizarre images: a cheetah that looks nothing like a cheetah." The researchers used AlexNet, a DNN created by University of Toronto researchers in 2012. The researchers found the genetic algorithm produced images of seemingly random static, which AlexNet declared to be pictures of a variety of animals with more than 99 percent certainty. The algorithm's confusion is due to differences in how it sees the world compared with humans, according to Clune. "All optical illusions are kind of hacking the human visual system, in the same sense that our paper is hacking the DNN visual system to fool it into seeing something that isn't there," Clune says.
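The evolutionary loop the Wyoming team describes can be sketched in miniature. The toy "classifier" below is a hypothetical stand-in for AlexNet (just a fixed pixel template), but the mechanism is the same: mutate candidate images and keep whichever ones the classifier scores most confidently, until noise earns a near-perfect score.

```python
import random

# Toy stand-in "classifier": scores how strongly a binary image matches a
# fixed template. This is a hypothetical miniature, not AlexNet; it only
# illustrates the select-and-mutate loop described in the article.
TEMPLATE = [1, 0, 1, 1, 0, 0, 1, 0]

def confidence(image):
    """Fraction of pixels agreeing with the template, in [0, 1]."""
    return sum(a == b for a, b in zip(image, TEMPLATE)) / len(TEMPLATE)

def evolve(generations=200, pop_size=20, seed=0):
    """Simple genetic algorithm: keep the top half, mutate one pixel each."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TEMPLATE] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=confidence, reverse=True)
        survivors = pop[: pop_size // 2]   # elitism: best images survive
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(len(child))
            child[i] = 1 - child[i]        # single-pixel mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=confidence)

best = evolve()
print(confidence(best))  # the evolved image matches the template perfectly
```

The evolved "image" is meaningless to a human, yet the scorer rates it with full confidence, which is the essence of the fooling result: the optimizer exploits whatever features the classifier happens to key on.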
Using Robots to Get More Food From Raw Materials
SINTEF (12/10/14)
SINTEF researchers are developing Gribbot, a fully functional robot designed to automate the process of extracting breast fillets from chickens, a task normally performed by humans. "Our aim is to automate absolutely everything we can think of on the food production line," says SINTEF researcher Ekrem Misimi. He says the robot should make Norwegian food production more sustainable, both in terms of profitability and utilization of raw materials. "We at SINTEF are the only specialists in Norway to have focused on solving these kinds of problems for the food industry," Misimi says. Gribbot has a hand for grasping, specially developed fingers, and three-dimensional vision based on the Microsoft Kinect 2. The researchers also developed Gribbot's algorithm, which enables it to extract breast fillets as well as a human can. Misimi notes the robot's camera and the robot itself must have the same vocabulary. "In other words, the robot's coordinate system must be able to understand the coordinates identified by the machine vision," he says. Gribbot was developed as part of a larger project called CYCLE, which aims to make Norwegian food production more profitable, more environmentally friendly, and more efficient. "Automating this work will speed up production and make it more efficient," Misimi says.
York Scientists Resolve Spin Puzzle
University of York (12/10/14) David Garner
University of York researchers have discovered the properties of defects in the atomic structure of magnetite, a breakthrough they say could potentially be used to produce more powerful electronic devices. Magnetite has many technological applications, including in spintronics, where it can be used to help develop more efficient and higher capacity memory devices. The breakthrough came in resolving the atomic-scale structure of the two-dimensional antiphase boundary defects (APBs) in the material. The researchers used theoretical modeling to predict the structure of the defects through a series of first principles calculations based on quantum mechanics. They then confirmed it using high-resolution transmission electron microscopy. The researchers found APB defects are unusually stable and cause antiferromagnetic coupling leading to reduced spin polarization. "Our study predicted what the atomic structure of the defects should be and then confirmed it using electron microscopy," says University of York researcher Keith McKenna. "We can now have confidence in making predictions about magnetite's electronic and magnetic properties, which will help optimization of the material. This will help the development of smaller, more powerful electronic devices, particularly more efficient memory devices."
Twitter Posts May Shine a Fresh Light on Mental Illness Trends
Johns Hopkins University (12/09/14) Phil Sneiderman
Johns Hopkins University (JHU) researchers have developed a technique using Twitter to gather important information about some common mental illnesses. The researchers say they can quickly and inexpensively collect new data on post-traumatic stress disorder, depression, bipolar disorder, and seasonal affective disorder by reviewing tweets from users who publicly mentioned their diagnosis and by looking for language cues linked to certain disorders. The researchers want to work with treatment providers and public health officials to use software to analyze tweets in order to help address the slow pace and high costs associated with collecting mental health data through surveys and other traditional methods. "We believe our new techniques could complement that process," says JHU researcher Glen Coppersmith. "We're trying to show that analyzing tweets could uncover similar results, but could do so more quickly and at a much lower cost." The researchers developed algorithms that discover mental health data from tweets by looking for words and language patterns associated with specific ailments. "Using Twitter to get a fix on mental health cases could be very helpful to health practitioners and governmental officials who need to decide where counseling and other care is needed most," says JHU professor Mark Dredze.
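At its simplest, the two-part approach the article describes (self-reported diagnoses plus language cues) can be sketched as pattern matching over tweet text. The pattern and cue lexicon below are invented for illustration, not the JHU team's actual models.

```python
import re

# Hypothetical, illustrative lexicons -- a sketch of the idea of matching
# self-reported diagnoses and language cues, not the actual JHU algorithms.
DIAGNOSIS_PATTERN = re.compile(
    r"diagnosed with (depression|ptsd|bipolar)", re.IGNORECASE)
CUE_WORDS = {"insomnia", "hopeless", "exhausted", "alone"}

def screen_tweet(text):
    """Return (self-reported diagnosis or None, count of cue words found)."""
    match = DIAGNOSIS_PATTERN.search(text)
    diagnosis = match.group(1).lower() if match else None
    words = set(re.findall(r"[a-z']+", text.lower()))
    return diagnosis, len(words & CUE_WORDS)

print(screen_tweet("Was diagnosed with PTSD last year; feeling hopeless and alone"))
# -> ('ptsd', 2)
print(screen_tweet("lovely sunny day"))
# -> (None, 0)
```

Real systems would use statistical language models rather than fixed word lists, but aggregating even simple signals like these over millions of public tweets is what makes the population-level trend estimates cheap and fast.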
Google Opens Its Cloud to Crack the Genetic Code of Autism
Wired News (12/09/14) Marcus Wohlsen
Google is partnering with Autism Speaks, an autism advocacy group, to sequence the genomes of 10,000 people on the autism spectrum along with their family members. Google will host and index the data for researchers to analyze as they look for variations in DNA that could lead to autism's genetic origins. "We'd like to leverage the same kind of technology and approach to searching the Internet every day to search into the genome for these missing answers," says Autism Speaks' chief science officer Rob Ring. The project will utilize Google Genomics, a tool launched by the company several months ago, designed to enable researchers to easily and inexpensively process large data sets. As part of the project, researchers can search for specific regions and sequences along genomes and find sections with common variations. In addition, because a single human genome can run to 100 gigabytes, having the data in a central location makes remote collaboration among researchers easier. "What matters most to us is that this research is going to allow us to uncover and understand the various forms of autism," says Autism Speaks' president Liz Feld.
Computer Scientists at UT Austin Crack Code for Redrawing Bird Family Tree
University of Texas at Austin (12/11/14) Marc Airhart
University of Texas (UT) at Austin researchers have developed statistical binning, a new computational technique that can produce an avian tree of life that points to the origins of various bird species. The four-year effort relied on supercomputers at UT Austin's Texas Advanced Computing Center (TACC) in order to extract insights on the timing of a "big bang" in bird evolution, rearrange evolutionary relationships between some bird species, and provide new insights on the origins of song pattern recognition in birds, as well as many other avian traits. The researchers sequenced the complete genomes of 48 living bird species, producing about 14,000 genomic regions per species. Earlier studies used about 200 bird species, but the UT Austin study used hundreds of times more genetic data per species, meaning the new bird family tree draws from far more data, resulting in some surprising findings. "This project is exciting because it shows that it's not just about being bigger and faster," says UT Austin computer science graduate student Siavash Mirarab. "Simply having more data doesn't make you more accurate. You have to come up with more intelligent ways to analyze your data." The entire effort to construct an avian evolutionary tree consumed 400 years of central processing unit (CPU) time and required the use of supercomputers at TACC, the Munich Supercomputing Center, and the San Diego Supercomputing Center.
Abstract News © Copyright 2014 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe