Welcome to the January 8, 2018 edition of ACM TechNews, providing timely information for IT professionals three times a week.
|
Scots Researchers Give Hope for Parkinson's Drug Side Effects
The National (Scotland), Kirsteen Paterson, January 8, 2018
Computer scientists at Heriot-Watt University in Scotland have developed a method for detecting dyskinesia (abnormality or impairment of voluntary movement), a side effect of Parkinson's disease treatment, which could be used to target medication more accurately. About 90 percent of Parkinson's patients treated with dopamine replacement drugs over 10 years report symptoms of dyskinesia. The researchers created an algorithm for detecting the condition and conducted two clinical studies, in which every patient displayed evidence of dyskinesia, to prove the algorithm identifies it reliably. The studies enabled the team to capture and mine data about how patients move and to use that data to build models, says Heriot-Watt professor Michael Lones. He notes the algorithm works by building a mathematical equation describing the patterns of acceleration that are characteristic of dyskinesia. The team is now using the work to develop a home-monitoring device that will help caregivers adapt and improve patients' treatment.
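The article does not give the equation itself; the following is a minimal sketch of the general approach, assuming wrist-worn accelerometer recordings and a hypothetical band-power feature (involuntary dyskinetic movement is often reported to concentrate roughly in the 1-4 Hz range). It is not Heriot-Watt's actual algorithm:

```python
# Illustrative sketch only; the feature, band, and threshold are assumptions,
# not the published Heriot-Watt method.
import numpy as np
from scipy.signal import welch

def dyskinesia_score(accel_magnitude: np.ndarray, fs: float = 50.0) -> float:
    """Fraction of total signal power falling in the 1-4 Hz band."""
    f, pxx = welch(accel_magnitude, fs=fs,
                   nperseg=min(len(accel_magnitude), 256))
    band = (f >= 1.0) & (f <= 4.0)
    return np.trapz(pxx[band], f[band]) / np.trapz(pxx, f)

# Hypothetical usage: flag 10-second windows whose score exceeds a threshold.
rng = np.random.default_rng(0)
window = rng.normal(size=500)            # stand-in for real accelerometer data
print(dyskinesia_score(window) > 0.35)   # threshold is illustrative, not clinical
```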
|
Vision Teacher
Technical University of Darmstadt (Germany), January 5, 2018
Researchers at the Technical University of Darmstadt (TU Darmstadt) in Germany are teaching intelligent algorithms to detect cars and pedestrians in street scenes, as well as potentially dangerous objects in x-ray images from transportation security screening. The software also reconstructs image information hidden in blurred or out-of-focus images, and it could relieve users of tedious tasks. To train the system, the researchers photograph a real scene and separate the individual objects from each other by tracing their outlines. To get by with less hand-labeled data, the researchers also rely on computer games that render deceptively realistic street scenes. "Based on the information contained in the computer game, we can detect which object that is already known re-appears at a later point in time," says TU Darmstadt professor Stefan Roth. He notes this means an object no longer needs to be re-annotated on each video frame.
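As a toy illustration of why game data helps, suppose (hypothetically) the game engine exposes a persistent identity for every object in every frame; then one human label per object can be propagated across the whole video. All names below are illustrative, not from the TU Darmstadt system:

```python
# Toy sketch: propagate one label per object across frames, assuming the
# rendering engine supplies persistent object ids (as a game engine can).
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def propagate_labels(
    frames: List[Dict[int, Box]],      # frames[t]: object id -> bounding box
    annotations: Dict[int, str],       # one human label per object id
) -> List[Dict[Box, str]]:
    """Label every box in every frame from a single annotation per object."""
    return [
        {box: annotations[obj_id] for obj_id, box in boxes.items()
         if obj_id in annotations}
        for boxes in frames
    ]

# The same pedestrian (id 1) in two consecutive frames needs only one label.
frames = [{1: (10, 20, 40, 80)}, {1: (14, 21, 40, 80)}]
print(propagate_labels(frames, {1: "pedestrian"}))
```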
|
Largest Prime Number Ever Found Has Over 23 Million Digits
New Scientist, Timothy Revell, January 4, 2018
Tennessee electrical engineer Jonathan Pace has discovered the largest-known prime number, M77232917, as part of the Great Internet Mersenne Prime Search (GIMPS), a collaborative project that harnesses volunteers' computers around the world. M77232917 comprises more than 23 million digits and is a Mersenne prime, meaning it is one less than a power of two (in this case, 2^77232917 - 1). Only 50 Mersenne primes are currently known, and GIMPS is credited with finding the last 16. GIMPS participants search for large Mersenne primes by downloading a free program; Pace's computer ran for six days to find M77232917, after which four other computers confirmed the result. The discovery may suggest Mersenne primes occur more frequently than previously thought, or this one may simply sit unusually close to its predecessor by chance. The pace of discovery also is accelerating thanks to growing computing power and software improvements.
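GIMPS-style searches rely on the Lucas-Lehmer test, which checks whether 2^p - 1 is prime by iterating a simple recurrence modulo the candidate. A short Python sketch for small exponents, along with the standard digit-count calculation for M77232917:

```python
# Lucas-Lehmer test: 2^p - 1 is prime iff s_(p-2) == 0, where s_0 = 4 and
# s_(k+1) = s_k^2 - 2 (mod 2^p - 1). Testing M77232917 itself took Pace's
# machine six days; here we only check small exponents.
def lucas_lehmer(p: int) -> bool:
    """Return True if 2**p - 1 is prime (p must be an odd prime)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]; 2^11-1 = 23*89

# Digit count of M77232917 without computing it: floor(p * log10(2)) + 1.
import math
print(math.floor(77232917 * math.log10(2)) + 1)  # 23,249,425 digits
```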
|
Cybersecurity in Self-Driving Cars
University of Michigan News, Susan Carney; Nicole Casal Moore, January 4, 2018
Researchers at the University of Michigan (U-M), working with Mcity, the nation's largest public-private partnership for advancing connected and automated mobility, have developed the Mcity Threat Identification Model, a tool for analyzing the likelihood of the cybersecurity threats that must be overcome before autonomous and connected vehicles can be widely adopted. The model outlines a framework for weighing factors such as an attacker's skill level and motivation, the vulnerable vehicle components, the ways an attack could be carried out, and the repercussions. Andre Weimerskirch, who leads Mcity's cybersecurity working group, says the tool can serve as a blueprint for identifying and analyzing cybersecurity threats and for devising effective approaches to making autonomous vehicle systems safe and secure. Applying the model to automated parking, the researchers found the most likely attacks to be a mechanic disabling the range sensors used by park-assist or remote parking in order to generate additional maintenance work, and an expert hacker sending a false signal to a vehicle's receiver to deactivate remote parking.
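The article lists the model's factors but not its internal scoring; a hypothetical sketch of how those factors might be encoded follows (field names and the toy likelihood score are illustrative, not Mcity's):

```python
# Hypothetical encoding of the factors the Mcity model weighs; the real
# model's fields and scoring are not described in the article.
from dataclasses import dataclass
from enum import Enum

class Skill(Enum):
    LOW = 1
    MEDIUM = 2
    EXPERT = 3

@dataclass
class Threat:
    attacker_skill: Skill
    motivation: str        # e.g., repeat repair business
    component: str         # vulnerable vehicle subsystem
    attack_vector: str     # how the attack could be achieved
    repercussions: str     # consequences if the attack succeeds

    def plausibility(self) -> float:
        """Toy score: attacks needing less skill are treated as more likely."""
        return 1.0 / self.attacker_skill.value

sensor_attack = Threat(Skill.LOW, "repeat repair business",
                       "park-assist range sensors", "physical tampering",
                       "disabled automated parking")
print(sensor_attack.plausibility())  # 1.0: the most plausible class of attack
```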
|
New Penn State Data Center Crunches Big Numbers
Penn State News, Krista Weidner, January 4, 2018
Pennsylvania State University (PSU) has launched a new data center that researchers can use to analyze massive datasets and run complex models that were previously too slow to be practical or impossible to manage. The facility hosts 23,500 computer cores, which PSU's Jenni Evans says enables world-class computation in an energy-efficient and economical way. One research group is preparing to use a computer cluster called the Cyber-Laboratory for Astronomy, Materials, and Physics to "create much more sophisticated simulations with much greater realism," says PSU's Eric Ford. The data center's computing power will help computer scientists, astronomers, physicists, and materials scientists better understand planetary masses and orbits, and will help predict where to look for planets that might be habitable. Researchers also will work on the Program on Coupled Human and Earth Systems, a multi-institution initiative developing tools to assess how stresses in a natural or human system affect other systems, such as energy infrastructure, water supply, and food production.
|
Deep Learning Sharpens Views of Cells and Genes
Scientific American, Amy Maxmen, January 4, 2018
Researchers at Google are using deep-learning convolutional neural networks to analyze retinal photos and predict a person's blood pressure, age, and smoking status, and a preliminary study suggests the networks can use this information to predict an individual's risk of an impending heart attack. The research is one of several deep-learning applications boosting the simplicity and versatility of image processing. Meanwhile, biologists at the Allen Institute for Cell Science are using convolutional neural networks to render flat, gray images of cells captured with light microscopes as three-dimensional images with colored organelles, making cellular staining unnecessary. "What you're seeing now is an unprecedented shift in how well machine learning can accomplish biological tasks that have to do with imaging," says Anne Carpenter of the Broad Institute of the Massachusetts Institute of Technology and Harvard University. Researchers also envision convolutional neural network-based image analysis as a way to uncover subtle biological phenomena that might otherwise go unnoticed.
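As a minimal sketch of the image-to-image idea behind the Allen Institute work (their production models are far larger, U-Net-style networks; every size here is arbitrary):

```python
# Tiny illustrative network mapping a grayscale micrograph to a predicted
# fluorescence channel, so no chemical staining is needed. Not the Allen
# Institute's actual architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),   # predicted organelle channel
)

brightfield = torch.randn(1, 1, 64, 64)    # one unstained 64x64 image
predicted_fluorescence = model(brightfield)
print(predicted_fluorescence.shape)        # torch.Size([1, 1, 64, 64])
```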
|
Machine Learning: The Good, the Bad, and the Ugly
Government Computer News, Matt Leonard, January 4, 2018
Although machine-learning technology is still in its early stages, the U.S. Intelligence Advanced Research Projects Activity (IARPA) has been studying machine learning since 2006. Its early efforts include the Biometrics Exploitation Science and Technology program, which developed facial-recognition tools that have since been widely adopted. Other projects focused on natural-language processing and formed the basis for applying machine learning to more complex problems, such as predicting cyberattacks from conversations in hacker forums and the market price of malware, forecasting military mobilization and terrorism, and building accurate three-dimensional models of buildings from satellite imagery. IARPA also is studying ways of improving neural networks, the architecture underlying much of modern machine learning. For example, the Machine Intelligence from Cortical Networks program aims to reverse-engineer the algorithms of the brain, and the agency is beginning to examine how quantum computing will affect machine learning.
|
Engineers Make Wearable Sensors for Plants, Enabling Measurements of Water Use in Crops
Iowa State University News Service, Mike Krapfl, January 3, 2018
Researchers at Iowa State University (ISU) have developed graphene-based sensors-on-tape that can be attached to plants to collect data for scientists and farmers. The researchers' "plant tattoo" sensor was created with a new process for fabricating intricate graphene patterns on tape. The first step is creating indented patterns on the surface of a polymer block, either by molding or by three-dimensional printing; a liquid graphene solution is then applied to the block, filling the indentations. Tape is used to remove the excess graphene, and a second strip of tape pulls away the graphene patterns themselves, yielding a sensor on the tape. The researchers say the method can produce precise patterns as small as 5 millionths of a meter (5 micrometers) wide, which increases the sensors' sensitivity. "The plant sensors are so tiny they can detect transpiration from plants, but they won't affect plant growth or crop production," says ISU professor Liang Dong.
|
Tailoring Cancer Treatments to Individual Patients
Texas Advanced Computing Center, Aaron Dubrow, January 3, 2018
Researchers at the University of Texas at Austin (UT Austin) Center for Computational Oncology are designing tumor growth models and predicting therapeutic results based on patient-specific conditions. "The models have parameters in them that are agnostic, and we try to make them very specific by populating them with measurements from individual patients," says UT Austin's Thomas Yankeelov. His team has demonstrated a way of predicting how gliomas (brain tumors) will grow with more accuracy than previous models, by including variables such as the mechanical forces exerted on the cells and the tumor's cellular heterogeneity. One key to this breakthrough is the Occam Plausibility Algorithm, which chooses the most plausible model for a given dataset and determines the model's validity for predicting cancer growth and morphology. The team analyzes patient-specific data from imaging studies and feeds it to the model, with tumor response factors represented by mathematical equations. They then use Texas Advanced Computing Center supercomputers to accelerate model generation.
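The article does not reproduce the team's equations; as a generic example of the kind of mechanistic model that gets calibrated to per-patient imaging data, consider textbook logistic growth, with the growth rate and carrying capacity as the patient-specific parameters:

```python
# Generic logistic tumor-growth model (not UT Austin's published glioma
# model): dN/dt = k * N * (1 - N/K), where k and K would be fitted to a
# patient's imaging-derived tumor volumes.
import numpy as np
from scipy.integrate import odeint

def logistic_growth(n, t, k, K):
    return k * n * (1.0 - n / K)

t = np.linspace(0, 60, 61)   # days
volumes = odeint(logistic_growth, 1.0, t, args=(0.15, 50.0)).ravel()
print(round(volumes[-1], 1))  # predicted volume (arbitrary units) at day 60
```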
|
Purdue Researchers Say Human Body Next Hub for Transmission of Data
Batesville Herald Tribune (IN), Scott L. Miley, January 2, 2018
Researchers at Purdue University say they have developed a Human Body Communication system that transmits data by conducting electric signals through a person's body to a network of other electronic devices, which can be either implanted or worn externally. "The human body is going to be the next [data transmission] hub," predicts Purdue professor Shreyas Sen. Although the human body picks up interference from outside sources, the Human Body Communication system blocks those signals and enables communication between devices. Sen says the system "sends the signal inside your body, it travels throughout your body, and it comes out of your body to the other device. With that in mind, when you handshake with somebody else you are forming a contact because my hand is conductive and your hand is conductive." Sen also notes that using the body can prevent data transmissions from being intercepted, because signals are not sent through the hackable medium of airwaves.
|
Thinking Machines Going Mainstream
SIGNAL Magazine, George I. Seffers, January 1, 2018
Experts predict cognitive computing eventually will become a routine component of new systems. "It will be an expectation of the users that this assistive, interactive, iterative role that it plays within decision making becomes the norm," says Cognitive Computing Consortium co-founder Sue Feldman. She believes more interactive technology will be "able to return answers or graphs or whatever is necessary in an iterative manner with the people who are using it," while also being contextual. Meanwhile, consortium co-founder Hadley Reynolds expects the boundaries between technological devices and other objects, including apparel and everyday appliances, to blur at an accelerating rate, to the point where the devices vanish completely. Feldman and Reynolds agree big data will continue to fuel advances in cognitive computing, and they see a strong possibility of a new profession emerging from cognitive computing and artificial intelligence. The co-founders cite ethics and trust, respectively, as the most pressing challenges remaining for cognitive computing.
|
MIT Expert on the Future of AI
ITProPortal, January 8, 2018
In an interview, Massachusetts Institute of Technology professor Vivienne Sze discusses a new course she has launched that seeks to integrate algorithmic design with computer hardware design. "In other words, helping people understand how to jointly design both the algorithms and the hardware so they are better suited for deep learning," Sze says. She sees the course's objective as giving students the knowledge to concurrently realize high accuracy while also fulfilling energy and speed requirements. Sze lists five factors requiring consideration in developing a practical and feasible deep-learning system: chip cost, accuracy, programmability, throughput/latency, and energy/power. "As [artificial intelligence] innovation continues, people must understand there are also limitations that, if not addressed, could prevent us from realizing the full potential of deep learning," Sze says. She notes more efficient systems are the key to supporting complex neural networks on Internet-of-things-enabled devices, such as smartphones, wearables, drones, and smart cars that have limited energy, storage, and computational resources.
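As a small illustration of the kind of hardware-aware accounting this implies, the operation and weight counts of a convolutional layer largely determine its throughput and energy cost on a given chip (the counts below are exact; energy per operation varies widely across hardware, and the layer dimensions are illustrative):

```python
# Count multiply-accumulate operations (MACs) and weights for one conv layer;
# these raw totals are what hardware-aware network design trades off against
# accuracy.
def conv_costs(h: int, w: int, c_in: int, c_out: int, k: int):
    """MACs and parameter count for a stride-1, 'same'-padded convolution."""
    macs = h * w * c_in * c_out * k * k
    params = c_in * c_out * k * k
    return macs, params

macs, params = conv_costs(h=56, w=56, c_in=64, c_out=64, k=3)
print(f"{macs:,} MACs, {params:,} weights")  # 115,605,504 MACs, 36,864 weights
```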
|