Welcome to the December 4, 2017 edition of ACM TechNews, providing timely information for IT professionals three times a week.


Squeezing Light Into a Tiny Channel Brings Optical Computing a Step Closer
Imperial College London
Hayley Dunning
November 30, 2017


Researchers at Imperial College London in the U.K. have paved the way for computers based on light rather than electronics by forcing light through a smaller gap than previously possible, shrinking the distance needed for light to interact by a factor of 10,000. Previously, light used for processing on microchips could be made to interact only with the help of particular materials and over relatively long distances. The researchers say the breakthrough means what previously took centimeters to accomplish can now be realized on the micrometer scale, bringing optical processing into the range of electrical transistors. "Because light does not easily interact with itself, information sent using light must be converted into an electronic signal, and then back into light," says Imperial College London's Michael Nielsen. "Our technology allows processing to be achieved purely with light." The researchers achieved this by using a metal channel to focus the light inside a polymer previously used in solar panels.

Full Article
Lobachevsky University Scientists in Search of Fast Algorithms for Discrete Optimization
Lobachevsky University
December 1, 2017


Researchers at Lobachevsky University (UNN) in Russia are launching a project seeking fast algorithms for discrete optimization, which could be used in the research and development of complex network structures for telecommunications and information technology. The researchers note that fast algorithms are still lacking for many problems known to be polynomially solvable. The challenges of building polynomial algorithms and studying the boundary of effective solvability give rise to numerous mathematical problems, which the UNN research team is striving to address. These problems include the study of the intersection of an integer lattice with a polyhedron, with an emphasis on describing the faces of the convex hull of this intersection. Other examples include the study of boundary classes of graphs, increasing graphs, constructing effective algorithms and corresponding lower bounds for a number of problems on graph classes, and building algorithms for discrete optimization problems such as minimizing quasiconvex functions on an integer lattice.
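
As a concrete, if toy, illustration of what "minimizing a quasiconvex function on an integer lattice" can look like, the sketch below performs a ternary search over integers for a one-dimensional unimodal function (a simple special case of quasiconvexity); the function, bounds, and search strategy are illustrative assumptions, not algorithms from the UNN project.

    # Illustrative sketch only: minimize a unimodal function (strictly
    # decreasing, then strictly increasing) over a 1-D integer lattice by
    # ternary search.  This is a toy example, not one of the UNN algorithms.

    def argmin_unimodal_on_integers(f, lo, hi):
        """Return an integer x in [lo, hi] minimizing the unimodal function f."""
        while hi - lo > 2:
            m1 = lo + (hi - lo) // 3
            m2 = hi - (hi - lo) // 3
            if f(m1) <= f(m2):
                hi = m2          # the minimum cannot lie strictly right of m2
            else:
                lo = m1          # the minimum cannot lie strictly left of m1
        return min(range(lo, hi + 1), key=f)

    # Example: f(x) = |x - 37| is unimodal; the integer minimizer is 37.
    print(argmin_unimodal_on_integers(lambda x: abs(x - 37), -1000, 1000))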

Full Article

Global Mosquito-Sensing Network Being Built Using Smartphones
Technology Review
November 30, 2017


Researchers at the University of Oxford in the U.K. say they have developed MozzWear, a low-cost smartphone sensor system to identify and monitor mosquito populations. The team says the system exploits the fact that mosquito species can be identified by the noise their wings make: MozzWear records mosquito noises, along with the time and location, and sends the data to a central server where the species can be identified. The team trained a machine-learning algorithm to recognize the characteristic acoustic signatures of different species and then identify the insects accordingly, using recordings of mosquitoes collected by the U.S. Centers for Disease Control and Prevention and the U.S. Army Medical Research Unit in Kisumu, Kenya. The team found its system accurately detects Anopheles mosquitoes about 72 percent of the time, a useful proof-of-principle demonstration.
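
A minimal sketch of the general recipe described above (extract an acoustic feature from each recording, then train a classifier on it) might look as follows; the synthetic wingbeat tones, the single dominant-frequency feature, and the scikit-learn random forest are illustrative assumptions, not the Oxford team's actual pipeline.

    # Minimal sketch of wingbeat-frequency classification on synthetic audio.
    # Real pipelines use richer acoustic features and models; the frequencies,
    # noise levels, and classifier below are assumptions for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    SR = 8000  # sample rate in Hz

    def synth_wingbeat(f0, seconds=0.5):
        """Synthesize a noisy tone whose fundamental mimics a wingbeat frequency."""
        t = np.arange(int(SR * seconds)) / SR
        return np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.randn(t.size)

    def dominant_frequency(signal):
        """Single illustrative feature: the strongest frequency in the spectrum."""
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, 1 / SR)
        return freqs[np.argmax(spectrum)]

    # Two hypothetical species with different typical wingbeat frequencies.
    X, y = [], []
    for label, f0 in [(0, 440.0), (1, 560.0)]:
        for _ in range(200):
            f = f0 + np.random.uniform(-30, 30)      # individual variation
            X.append([dominant_frequency(synth_wingbeat(f))])
            y.append(label)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
    clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))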

Full Article

Not Your Father's Analog Computer
IEEE Spectrum
Yannis Tsividis
December 1, 2017


Significant work is underway in analog computing in the context of machine learning, machine intelligence, and biomimetic circuits, an important consideration as digital computers approach their efficiency limits, writes Columbia University professor Yannis Tsividis. He notes one of his colleagues built an analog computer on a single chip, and more recently a Columbia team developed a second-generation single-chip analog computer. "All blocks in our device operated at the same time, processing signals in a way that in the digital realm would require a highly parallel architecture," Tsividis says. He notes the system uses power more efficiently and connects more easily with digital computers, offering analog for high-speed approximate computations and digital for high-speed programming, storage, and computation. Tsividis cites the chip's circuit design for its ability to continuously compute arbitrary mathematical functions via analog-to-digital and digital-to-analog converters. The chip's analog core supports direct interfacing with sensors and actuators, while its high speed enables real-time engagement with users in complex computations.
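
The division of labor Tsividis describes, analog hardware for fast approximate answers and digital logic for exact follow-up, can be mimicked in software. The sketch below is only an analogy under that assumption: a noisy "analog" stage seeds a root-finding problem and a "digital" stage refines it with Newton iterations; the function and noise level are made up for illustration.

    # Software analogy (not the Columbia chip): an "analog" stage returns a fast
    # but approximate root of f(x) = x**3 - 2x - 5, and a digital stage refines it.
    # The noise level and the Newton refinement are illustrative assumptions.
    import random

    def f(x):  return x**3 - 2*x - 5
    def df(x): return 3*x**2 - 2

    def analog_guess():
        """Stand-in for an analog core: quick, low-precision answer."""
        true_root = 2.0945514815423265
        return true_root + random.uniform(-0.05, 0.05)   # a few percent of error

    def digital_refine(x, steps=4):
        """Stand-in for the digital side: a few exact Newton iterations."""
        for _ in range(steps):
            x = x - f(x) / df(x)
        return x

    seed = analog_guess()
    print("analog seed:", seed, "-> refined:", digital_refine(seed))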

Full Article

Stanford-Led Artificial Intelligence Index Tracks Emerging Field
Stanford News
Andrew Myers
November 30, 2017


A Stanford University-led team called the AI100 has launched the AI Index, the first index to track the state of artificial intelligence (AI) and measure technological progress in the same way gross domestic product and the S&P 500 track the U.S. economy and stock market, respectively. The new index reveals a 14-fold increase in AI startups and a six-fold gain in investment since 2000, as well as significant improvements in the technology's ability to mimic human performance. The AI Index monitors and quantifies at least 18 separate vectors in academia, industry, open source software, and public interest, in addition to technical evaluations of progress toward "human-level performance" in areas such as speech recognition and computer vision. Specific metrics include assessments of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency, and media mentions. Still, Stanford professor Yoav Shoham, recipient of the 2012 ACM-AAAI Allen Newell Award, notes a five-year-old's general intelligence remains beyond AI's capabilities.
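
For readers curious how such normalized metrics are typically reported, the sketch below expresses each metric as a multiple of its year-2000 value, mirroring the 14-fold and six-fold figures above; the absolute numbers are placeholders, not data from the actual AI Index.

    # Sketch of the kind of normalization the AI Index applies: each metric is
    # reported as a multiple of its 2000 value.  The numbers below are
    # placeholders, not figures from the actual index.
    baseline_2000 = {"ai_startups": 100, "vc_investment_musd": 500}
    latest_2017   = {"ai_startups": 1400, "vc_investment_musd": 3000}

    for metric, base in baseline_2000.items():
        multiple = latest_2017[metric] / base
        print(f"{metric}: {multiple:.1f}x the 2000 level")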

Full Article
Virtual Reality for Bacteria
IST Austria
November 30, 2017


Researchers at the Institute of Science and Technology Austria (IST Austria) have developed a method for controlling the behavior of individual bacteria by connecting them to a computer and setting up an organic-digital genetic circuit. In the experiment, the researchers make gene expression in the bacteria oscillate and control the patterns of oscillation by adjusting digital communication between individual bacteria. Modified E. coli cells produce a protein that fluoresces blue-violet, forming the interface with the digital side. Every six minutes, the computer measures how much light the cell generates and accumulates a virtual signal molecule in proportion to it; when the signal exceeds a certain threshold, production of the fluorescent protein is deactivated, linking the digital component back to the organic parts of the circuit. "The cells are interacting with the simulated environment," says IST Austria's Remy Chait. "What they do influences what the computer does and what the computer does influences the reaction of the cells."
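
The digital half of that loop (measure fluorescence every six minutes, accumulate a virtual signal in proportion to it, switch production off above a threshold) can be captured in a short simulation. The constants and the crude model of the cell's response below are assumptions for illustration, not IST Austria's parameters.

    # Toy simulation of the digital half of the feedback loop described above:
    # every "six minutes" the computer reads the cell's fluorescence, accumulates
    # a virtual signal in proportion to it, and switches protein production off
    # while the signal is above a threshold.  All constants are illustrative.
    def simulate(steps=60, threshold=50.0, gain=0.1, decay=0.9):
        fluorescence, signal, producing = 0.0, 0.0, True
        history = []
        for _ in range(steps):                    # one iteration = one 6-minute cycle
            fluorescence += 5.0 if producing else -3.0   # crude cell response model
            fluorescence = max(fluorescence, 0.0)
            signal = decay * signal + gain * fluorescence  # accumulate virtual molecule
            producing = signal < threshold        # computer's decision fed back to the cell
            history.append(round(fluorescence, 1))
        return history

    print(simulate())   # the trace rises and falls: an oscillation set by the digital loop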

Full Article
With 'Material Robotics,' Intelligent Products Won't Even Look Like Robots
Oregon State University News
Steve Lundeberg
November 29, 2017


Researchers at Oregon State University (OSU), the Swiss Federal Institute of Technology in Lausanne, and Yale University subscribe to a vision of robotics in which intelligence is embedded in the very matter of robots, to the point where they no longer resemble robots, a concept they call "material-enabled robotics." OSU professor Yigit Menguc describes "shoes that are able to intelligently support your gait, change stiffness as you're running or walking, or based upon the surface you're on or the biomechanics of your foot" as one potential incarnation of material-enabled robotics. The researchers also note that methods of producing composite materials designed to match the complexity of functional biological tissue currently follow two paths: new materials synthesis and system-level integration of material elements. "The convergence of these approaches will ultimately yield the next generation of material-enabled robots," Menguc predicts. "It's a natural partnership that will lead to robots with brains in their bodies--inexpensive and ever-present robots integrated into the real world."

Full Article

A Nanotransistor Made of Graphene
Empa
Karin Weinmann
November 29, 2017


Researchers at the Swiss Federal Laboratories for Materials Science and Technology (Empa), the Max Planck Institute for Polymer Research in Germany, and the University of California, Berkeley have grown graphene ribbons exactly nine atoms wide with a regular armchair edge from precursor molecules. The molecules are evaporated in an ultra-high vacuum and, after several process steps, assembled on a gold base into the desired nanoribbons, which are about one nanometer wide and up to 50 nanometers long. The researchers also integrated the graphene ribbons into nanotransistors; however, to function as intended, the silicon oxide dielectric layer needed to be 50 nanometers thick, which in turn hampered the behavior of the electrons. By using hafnium oxide instead of silicon oxide as the dielectric material, the researchers were able to shrink this layer to just 1.5 nanometers, raising the "on"-current by orders of magnitude.
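
A back-of-the-envelope calculation suggests why the thinner, higher-permittivity dielectric boosts the "on"-current so dramatically: parallel-plate gate capacitance per unit area scales as permittivity over thickness. The relative permittivities below are typical textbook values, not figures from the study.

    # Rough check on why thinning the gate dielectric matters so much:
    # parallel-plate gate capacitance per unit area is C = eps0 * k / t.
    # Relative permittivities are typical textbook values (assumptions).
    EPS0 = 8.854e-12          # vacuum permittivity, F/m

    def cap_per_area(k, t_nm):
        return EPS0 * k / (t_nm * 1e-9)   # F/m^2

    sio2 = cap_per_area(3.9, 50)   # 50 nm silicon oxide
    hfo2 = cap_per_area(25, 1.5)   # 1.5 nm hafnium oxide
    print(f"capacitance gain: {hfo2 / sio2:.0f}x")   # roughly 200x stronger gate coupling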

Full Article
'Magnetoelectric' Material Shows Promise as Memory for Electronics
University of Wisconsin-Madison
Renee Meiller
November 29, 2017


Researchers at the University of Wisconsin-Madison (UW-Madison) have developed a unique process for making high-quality magnetoelectric material. Previous studies of magnetoelectric properties relied on very complex materials that lack uniformity, while the new process, developed by UW-Madison professor Chang-Beom Eom, uses atomic "steps" to guide the growth of a homogeneous, single-crystal thin film of bismuth ferrite. The researchers then added cobalt, which is magnetic, and placed an electrode made of strontium ruthenate underneath. "We found that in our work, because of our single domain, we could actually see what was going on using multiple probing, or imaging, techniques," Eom notes. He says the new homogeneous material enabled the researchers to answer important scientific questions about how magnetoelectric cross-coupling happens, and it also could enable manufacturers to improve their electronics. "Now we can design a much more effective, efficient, and low-power device," Eom notes.

Full Article

Scaling Deep Learning for Science
Oak Ridge National Laboratory
Jonathan Hines
November 28, 2017


Researchers at Oak Ridge National Laboratory (ORNL) used the Cray XK7 Titan supercomputer to develop Multinode Evolutionary Neural Networks for Deep Learning (MENNDL), an algorithm for generating custom neural networks that match or exceed the performance of handcrafted artificial intelligence systems. The team notes MENNDL is designed to assess, evolve, and optimize neural networks for novel datasets, and it can concurrently test and train thousands of potential networks for a science problem. "What we did is use this evolutionary algorithm on Titan to find the best hyperparameters for varying types of datasets," says ORNL's Steven Young. The algorithm's use of Titan's graphics-processing-unit computing power also enables the auto-generated networks to be produced within hours instead of months. "To really leverage the machine, we set up MENNDL to generate a queue of individual networks to send to the nodes for evaluation as soon as computing power becomes available," Young says.
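
In outline, an evolutionary hyperparameter search of the kind MENNDL performs keeps the fittest candidate networks each generation and spawns mutated variants of them. The sketch below substitutes a synthetic fitness function for actual network training, and its population size, mutation scheme, and hyperparameter ranges are illustrative assumptions.

    # Minimal sketch of an evolutionary hyperparameter search in the spirit of
    # MENNDL.  A real run would train a network per candidate on GPU nodes; here
    # a synthetic fitness function stands in for validation accuracy.
    import random

    SPACE = {"layers": (2, 10), "filters": (8, 256), "lr_exp": (-5, -1)}

    def random_candidate():
        return {k: random.uniform(*rng) for k, rng in SPACE.items()}

    def mutate(cand):
        child = dict(cand)
        k = random.choice(list(SPACE))
        lo, hi = SPACE[k]
        child[k] = min(hi, max(lo, child[k] + random.gauss(0, (hi - lo) * 0.1)))
        return child

    def fitness(cand):
        """Stand-in for 'train the network and report validation accuracy'."""
        return -((cand["layers"] - 6) ** 2 + (cand["filters"] - 96) ** 2 / 100
                 + (cand["lr_exp"] + 3) ** 2)

    population = [random_candidate() for _ in range(20)]
    for generation in range(30):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]                      # keep the best candidates
        population = parents + [mutate(random.choice(parents)) for _ in range(15)]

    print("best hyperparameters found:", max(population, key=fitness))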

Full Article

New Research Creates a Computer Chip That Emulates Human Cognition
YaleNews
William Weir
November 28, 2017


Researchers at Yale University and IBM have developed TrueNorth, a chip that contains about 5.4 billion transistors and 1 million "neurons" that communicate via 256 million "synapses." TrueNorth was developed as part of the U.S. Defense Advanced Research Projects Agency's (DARPA) Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program. The chip achieves greater complexity with much less energy consumption by having all of its functions work asynchronously and in parallel, similar to how neuroscientists believe the brain operates. "To achieve the ambitious metrics of DARPA SyNAPSE, a key element was to design and implement event-driven circuits for which asynchronous circuits are natural," says IBM's Dharmendra Modha, who received the ACM Gordon Bell Prize in 2009. The researchers also developed a multi-object detection and classification application to test TrueNorth. One challenge was to detect people, bicyclists, cars, trucks, and buses appearing periodically in a video, and another was to correctly identify each object; the researchers say TrueNorth proved adept at both tasks.
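
The event-driven style the article highlights can be conveyed with a toy spiking network in which a neuron does work only when a spike arrives and, on crossing its threshold, emits events to its downstream synapses. The weights, threshold, and three-neuron topology below are made-up illustrations, not TrueNorth's design.

    # Toy event-driven spiking network: neurons only do work when a spike event
    # arrives, loosely mirroring the asynchronous, event-driven style described
    # above.  Thresholds, weights, and topology are illustrative assumptions.
    from collections import deque

    weights = {0: {1: 0.6, 2: 0.4}, 1: {2: 0.7}, 2: {}}   # synapse weights
    potential = {n: 0.0 for n in weights}                 # membrane potentials
    THRESHOLD = 1.0

    events = deque([(0, 1.2)])          # an input spike driving neuron 0
    while events:
        neuron, charge = events.popleft()
        potential[neuron] += charge
        if potential[neuron] >= THRESHOLD:          # neuron fires...
            potential[neuron] = 0.0
            for target, w in weights[neuron].items():
                events.append((target, w))          # ...and emits events downstream
            print(f"neuron {neuron} spiked")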

Full Article
Florida Research Team Examines How Use of Sonar Can Thwart Voice Spoofing
Tech Xplore
Nancy Owano
November 26, 2017


Researchers at Florida State University have developed VoiceGesture, an app that uses a smartphone's speaker and microphone as a sonar to monitor the user's lip movements as words are spoken for authentication, which could help thwart voice spoofing. VoiceGesture uses the smartphone as a Doppler radar, transmitting a barely audible, high-pitched 20-kilohertz acoustic signal from its speaker and listening for reflections at the microphone when the user speaks a passphrase. The system performs "liveness" detection by extracting features in the Doppler shifts caused by the unique articulatory gestures made when a user speaks; these gestures include the movements of the lips, jaw, and tongue. The researchers found the system achieves more than 99-percent detection accuracy with an equal error rate of about 1 percent. "During the online authentication process, the extracted features of a user input utterance are compared against the ones in the system," note the researchers. They say if the extracted features produce a similarity score higher than a predefined threshold, a live user is declared.
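
The final decision step the researchers describe, comparing extracted features against an enrolled profile and accepting when a similarity score clears a threshold, reduces to a few lines. The feature vectors, the cosine similarity measure, and the 0.9 threshold below are synthetic placeholders rather than VoiceGesture's actual parameters.

    # Sketch of the final decision step described above: compare Doppler-shift
    # features from the current utterance against the enrolled profile and
    # declare a live user when similarity clears a threshold.  The feature
    # vectors and the 0.9 threshold are synthetic placeholders.
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    enrolled_profile = np.array([0.8, 1.1, 0.4, 0.9, 1.3])   # features from enrollment
    live_attempt     = enrolled_profile + np.random.normal(0, 0.05, 5)
    replay_attack    = np.array([1.3, 0.2, 1.1, 0.1, 0.4])   # loudspeaker: different articulation pattern

    THRESHOLD = 0.9
    for name, features in [("live attempt", live_attempt), ("replay attack", replay_attack)]:
        score = cosine_similarity(enrolled_profile, features)
        print(f"{name}: similarity={score:.2f} ->", "accept" if score >= THRESHOLD else "reject")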

Full Article
Association for Computing Machinery

2 Penn Plaza, Suite 701
New York, NY 10121-0701
1-800-342-6626
(U.S./Canada)



ACM Media Sales

If you are interested in advertising in ACM TechNews or other ACM publications, please contact ACM Media Sales at (212) 626-0686, or visit ACM Media for more information.

To submit feedback about ACM TechNews, contact: [email protected]