Association for Computing Machinery
Welcome to the February 3, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.

Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.

HEADLINES AT A GLANCE


At Berkeley, a New Digital Privacy Protest
The New York Times (02/01/16) Steve Lohr

A network breach at the University of California, Los Angeles medical center last summer prompted University of California president Janet Napolitano to deploy hardware and software in the university system's data centers that monitor patterns of digital traffic for signs of suspicious activity. A coalition of Berkeley professors says this move, while well intentioned, threatens academic freedom. In October, Napolitano's office released a short statement describing the new data-tracking program, known as the Coordinated Monitoring and Threat Response Initiative. An outside contractor is providing and operating the program's hardware and software, and the president's office has set up a Cyber-Risk Governance Committee for the program and similar initiatives. The coalition of Berkeley professors says the university system enacted monitoring largely in private, with little transparency about what data is being gathered. They warn tracking could compromise and limit academic freedom to research topics that some find objectionable. In response, the university said, "privacy perishes in the absence of security." The dispute embodies the overall challenge of balancing computer security needs and privacy, and experts note different environments have different rules, expectations, and threat levels.


5 Futuristic Oddities From the Weird World of Wearable Tech
The Washington Post (02/01/16) Dominic Basulto

Concepts of human-machine interaction are undergoing a transformation thanks to advances in wearable technology. Robotic exoskeletons have made tremendous progress, with next-generation technology promising to do more than simply serve as rehabilitation tools; strength enhancement, new sensory input, and innovations such as the U.S. military's TALOS "Iron Man suit" to give soldiers superhuman abilities are some of the advancements on the horizon. Another category of wearable tech includes portable mind monitors and brain-computer interfaces designed to enable completely thought-driven actions. Meanwhile, "stealth wear" shows promise as a counter-surveillance tool, such as apparel made from reflective material that can block cameras and mobile tracking devices. Also notable are smart clothing and textiles, which are already making inroads in the fitness space by monitoring bodily performance. A more advanced example is the sensor-equipped Diffus Climate Dress, a garment that indicates carbon dioxide content in the air by creating light patterns in decorative light-emitting diodes woven into the dress. A fifth type of wearable tech with potential is the kind powered by the wearer, such as a prototype Samsung watchband that enables users to make phone calls by sticking their finger in their ear. The arm movement serves to accept an incoming call, while the watchband helps transmit sound vibrations through the finger into the ear via body conduction.


Building a Foundation for CS for All
National Science Foundation (01/30/16) Aaron Dubrow

The White House on Saturday announced the CS for All initiative, intended to give all students in the U.S. the opportunity to study computer science (CS), with the U.S. National Science Foundation (NSF) and the Department of Education leading the effort. As part of the initiative, NSF is pledging $120 million over five years to expedite its efforts to facilitate CS education in schools throughout the U.S., which could prepare up to 9,000 additional high school CS teachers. In addition, NSF and the Education Department will co-fund an initiative to prototype professional development for Career and Technical Education educators who teach CS. Collaboration between NSF and the U.S. Department of Defense on the National Math and Science Initiative will support the deployment of the new Advanced Placement CS Principles framework at 200 sites in the country. Moreover, scalable and sustainable professional development for high-school CS teachers will be supported via NSF/private-sector collaboration. "CS for All builds on NSF's investments in developing, piloting, and assessing materials and resources for computer science education, and on the growing momentum for science, technology, engineering, and mathematics education more broadly, that is taking hold across the country," says NSF director France Cordova.


Cynthia Breazeal's Robotic Quest
The Wall Street Journal (01/29/16) Alexandra Wolfe

Cynthia Breazeal, founder and director of the Personal Robots Group at the Massachusetts Institute of Technology's Media Lab, has committed herself to the development of socially intelligent robots. Recent products of her research include Jibo, designed to be a companion as well as an assistant, and Buddy, another companion robot that can remind people of appointments and events while also monitoring household operations. The guiding design principle of Breazeal's social robots is improving the machines' responses to people in ways that appear empathetic and emotional. She uses her projects to investigate how well robots can interpret people and whether humans will trust and feel comfortable engaging with such devices. Breazeal says she also explores the question, "what are these emotional, interpersonal interactions that set the foundation for all communication?" She does not think interactions with robots will ultimately replace human relations, but people could regard the robots as pets or supportive acquaintances. "I'm hoping not for a denigration of the human experience but almost a re-enlightenment and a re-appreciation of what it means to be human," Breazeal says. "How can we create technology to support who we are?"


Moore's Law Goes Post-CMOS
EE Times (02/01/16) Rick Merritt

Although the economics of Moore's law remain sound as long as the focus is on reducing cost per transistor, the move beyond complementary metal-oxide semiconductor (CMOS) technology will bring changes in all aspects of computer architecture, according to Intel's William Holt, speaking this week at the International Solid-State Circuits Conference. He noted the new techniques include tunneling field-effect transistors (FETs), ferroelectric FETs, spintronics, and new III-V materials, among others. In general, Holt said, engineers will stretch CMOS as far as possible, but in the long term, chips will be hybrids of different techniques combined with traditional CMOS technology. "We will see a mixed mode operation...parts [of the wafer] with CMOS and new devices on same wafer optimized for different benefits," Holt said. The major challenge is that although all of the post-CMOS technologies help reduce power consumption, they also run significantly slower than CMOS circuits. Holt said the industry should focus on keeping power down and reducing cost per transistor. "It's too early to make a prediction on the details of the 7-nm node, but we can say we may be more in the range of the historical line of cost per transistor reduction at 7 nm--but we see a feasible path to cost reduction," he said.


Energy-Friendly Chip Can Perform Powerful Artificial-Intelligence Tasks
MIT News (02/03/16) Larry Hardesty

Massachusetts Institute of Technology researchers at this week's International Solid-State Circuits Conference introduced a new chip that implements neural networks. The researchers say the chip is 10 times more efficient than a mobile graphics-processing unit (GPU), so it could enable mobile devices to run powerful artificial-intelligence algorithms locally, instead of uploading data to the Internet for processing. The Eyeriss chip has 168 cores, and its efficiency is rooted in its ability to minimize the frequency with which cores must exchange data with distant memory banks. Many of the cores in a GPU share a single, large memory bank, but each Eyeriss core has its own memory. The chip also features a circuit that compresses data before sending it to individual cores. Each core can communicate directly with its immediate neighbors, so cores that need to share data do not have to route it through main memory. In addition, Eyeriss has special-purpose circuitry that allocates tasks across cores and can be reconfigured for different types of neural networks; it automatically distributes both the data manipulated by the simulated nodes and the data describing the nodes themselves across cores, maximizing the amount of work each core can perform before retrieving more data from main memory.
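The benefit of giving each core its own memory can be illustrated with a toy accounting model (this is an illustration of the general data-reuse idea, not the Eyeriss design or its actual figures): if a core caches its filter weights locally, it fetches them from main memory once instead of once per output.

```python
# Toy model: count main-memory fetches for applying one filter to many
# outputs, with and without a per-core local buffer for the filter weights.
# All names and numbers here are illustrative assumptions, not Eyeriss data.

def fetches_without_local_memory(n_outputs, filter_size):
    # No local reuse: every output refetches the weights and its inputs.
    return n_outputs * (filter_size + filter_size)

def fetches_with_local_memory(n_outputs, filter_size):
    # Weights are fetched once into the core's local buffer, then reused;
    # only the inputs for each output still come from main memory.
    return filter_size + n_outputs * filter_size

if __name__ == "__main__":
    naive = fetches_without_local_memory(1000, 9)   # 18000 fetches
    cached = fetches_with_local_memory(1000, 9)     # 9009 fetches
    print(naive, cached)  # local reuse roughly halves main-memory traffic
```

In this sketch, caching only the weights already halves the traffic to main memory; reusing inputs shared between neighboring outputs (as neighbor-to-neighbor communication allows) would reduce it further.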


With STEM Degrees, It's Not the School That Matters
Computerworld (02/02/16) Patrick Thibodeau

The prestige of the school from which someone obtains a science, technology, engineering, and math (STEM) degree may make little difference in terms of how much money they earn, according to a new study examining STEM salaries 10 years after graduation. The researchers observed few salary differences among more than 7,000 STEM graduates, although students with liberal arts degrees from top schools made more than those from less-prestigious institutions. Brigham Young University professor Mark H. Showalter thinks standardization in science and engineering curricula may be a factor in this trend, noting a difference in wages is observable by excluding test and income data from the salary assessment. Carla Brodley, dean of computer science at Northeastern University, says computer science graduates' wages are determined by their performance on the technical interview, which may include a coding test, when being hired. "It's not clear to me that higher-ranked schools prepare you better," she notes. Although JMJ Phillip Executive Search's Dennis Theodorou acknowledges a top-tier school will open up more job opportunities for undergrads, he says after a decade "it's really about the experience you have."


Professor, Grad Students Make 3D Scanning Accessible, Accurate
Brown Daily Herald (02/02/16) Olivia Katcher

Brown University researchers have developed an algorithm that can turn conventional smartphones and digital cameras into three-dimensional (3D) scanners. The Projector-Camera Calibration system pairs a camera with a projector connected to a computer. The researchers created a setup in which the projector displays a sequence of patterns on the object being scanned; for every pattern projected, a picture of the object is taken and then transferred to the computer. The projection of each pattern must be synchronized with the capture of its respective image to keep the image's pattern from becoming blurry. The synchronized 3D scanner creates high-definition models with a small number of images, and although there is no limit to the range of items that can be scanned with this method, the camera and the projector must be connected and synchronized for the system to work, according to the researchers. The algorithm "takes the images and calculates the new images, which are the ones that would be captured if the camera and projector were synchronized," says Brown University Ph.D. student Daniel Moreno. The researchers plan to build a mini projector about the size of a flash drive that will project the patterns in sequence, as well as a smartphone app that will run the software for the 3D scanning.
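The pattern-decoding step of this kind of structured-light scanning can be sketched in a few lines (a generic illustration, not the Brown group's code): the projector shows a sequence of binary stripe patterns, and the on/off sequence observed at a camera pixel identifies which projector column illuminated it. Plain binary codes are used here for simplicity; real systems often prefer Gray codes for robustness at stripe boundaries.

```python
# Generic structured-light sketch (illustrative only): encode projector
# columns as binary stripe patterns, then decode a pixel's observed bits.

def project_patterns(n_columns, n_bits):
    """For each bit position (most significant first), list which columns are lit."""
    return [[(col >> bit) & 1 for col in range(n_columns)]
            for bit in reversed(range(n_bits))]

def decode_pixel(observed_bits):
    """Recover the projector column from the on/off sequence seen at one pixel."""
    col = 0
    for bit in observed_bits:  # most significant bit first
        col = (col << 1) | bit
    return col

if __name__ == "__main__":
    patterns = project_patterns(8, 3)
    # Simulate a camera pixel that is lit by projector column 5 in each pattern.
    observed = [p[5] for p in patterns]
    print(decode_pixel(observed))  # 5
```

Once each camera pixel is matched to a projector column this way, triangulation between the calibrated camera and projector yields the 3D position of the surface point, which is why precise synchronization of pattern and capture matters.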


How to Make Your Own NSA Bulk Surveillance System
Wired (01/27/16) Kim Zetter

Nicholas Weaver, a University of California, Berkeley (UC Berkeley) expert in network surveillance and security issues, set out to build a miniature bulk surveillance system capable of performing all the primary tasks of a U.S. National Security Agency (NSA) spy system after the Edward Snowden leak exposed the agency's XKEYSCORE program. Established around 2008, NSA's "widest reaching" surveillance program uses more than 700 servers to store data drawn from the Internet's backbone and mine the data for patterns and connections. Weaver, a senior researcher at the International Computer Science Institute at UC Berkeley, examined the documents and realized the spy agency was doing the same security research he has been doing for a decade. Moreover, Weaver realized he already had off-the-shelf equipment that met the criteria for emulating the NSA. Weaver says it took him about 50 hours to assemble his surveillance system, including writing about 600 lines of code for the intrusion detection component. He notes it would cost someone about $850 to build something like his 100-Mbps system, and there was little new about the surveillance technology. "When national security programs are hobby-level, you really have to worry that anybody else can do them," Weaver says.


Making our Digital World Safer From Cyberattacks
Imperial College London (01/27/16) Colin Smith

Researchers from the Singapore University of Technology and Design (SUTD), the National University of Singapore (NUS), and Imperial College London (ICL) are working to help ensure cybersecurity develops in conjunction with changes in technology and with emerging threats. In one project, the researchers are investigating new approaches for making infrastructure more secure from evolving cyberthreats. The team, co-led by ICL's Deeph Chana and SUTD's Aditya Mathur, is building computer models that will provide a testing ground for how these interconnected infrastructures will operate when faced with cyberthreats. "We are building model systems that will enable rapid, repeated simulations that represent realistic breaches in cybersecurity," Chana says. A second team, led by ICL professor Wolfram Wiesemann and NUS professor Xu Huan, will focus on developing safer methods for sharing confidential digital information that do not compromise the privacy rights of citizens and organizations. The researchers plan to develop a set of procedures and algorithms that enable private datasets to be analyzed without confidentiality breaches. In addition, the researchers aim to develop the underlying techniques in software that will make this shared data and analysis of results easily interpretable.


A Group of Scholars Look to Early 20th Century Radio Technology to Help Improve Internet Security
Stanford University (01/27/16) Andrew Myers

Stanford University researchers say they have developed a quantum light source that could serve as the basis for quantum communication. Quantum light is much weaker than the rest of the light coming from the type of modified laser used for quantum communication, so the researchers created a way to filter out the unwanted light, enabling the quantum signal to be read much more easily. The researchers note the filtering process is similar to the way noise-cancelling headphones operate, only with light instead of sound. Some of the light coming back from the modified laser is like noise, preventing researchers from seeing the quantum light, so the Stanford team canceled it out to reveal and emphasize the quantum signal hidden underneath. The researchers adapted an interference technique borrowed from 1930s-era radio engineering to cancel the unwanted classical light. First they determined what the "noise" looks like and played it back. The researchers adjusted how the canceling light and the classical light overlap, canceling out the unwanted light and revealing the quantum light. Stanford professor Jelena Vuckovic says this development "provides us with a practical pathway to secure quantum communications."


Delivering the Internet of the Future--at the Speed of Light and Open Sourced
University of Bristol News (01/26/16)

Researchers have proposed a new approach to the problem of rising bandwidth demands driven by increasingly diverse applications. The research by the High Performance Networks (HPN) group in the University of Bristol's Department of Electrical and Electronic Engineering calls for developing a new high-performance network infrastructure that is open and programmable and uses light to carry Internet traffic. The researchers say their approach introduces new concepts of an open source optical Internet enabled by optical white box and software-defined network technologies. "The technologies suggested could pave the way for the creation of new Internet services and applications not previously possible or disruptive," says HPN group leader Dimitra Simeonidou. "The technologies could also potentially change the balance of power from vendors and operators that are monopolizing the current Internet infrastructure to wider users and service providers." HPN researcher Reza Nejabati notes, "these technologies will hide complexity of optical networks and open them up for traditional programmers and application developers to create new type of Internet applications taking advantages of speed of light."


Searching for the Algorithms Underlying Life
Quanta Magazine (01/28/16) John Pavlus

In an interview, Harvard University computer scientist and ACM A.M. Turing Award winner Leslie Valiant discusses his definition of an "ecorithm," or a learning algorithm that "runs" on any system capable of interacting with its physical environment. According to Valiant, ecorithms can apply to biological organisms or whole species, and the concept draws a computational parallel between how individuals learn and how ecosystems evolve. He says this notion also assumes a mechanistic perception of adaptive behavior. "An ecorithm is an algorithm, but its performance is evaluated against input it gets from a rather uncontrolled and unpredictable world," Valiant notes. "And its goal is to perform well in that same complicated world." He says machine learning is an effective way of circumventing the practical problems of simulating brain computations so true artificial intelligence can be realized. "If one has a more high-level computational explanation of how the brain works, then one would get closer to this goal of having an explanation of human behavior that matches our mechanistic understanding of other physical systems," he says. Valiant reasons if people understood the learning algorithms employed by the brain, this would provide mechanistic concepts much closer to human behavior, "and the explanations they would give as to why you do what you do would become much more plausible and predictive."


Abstract News © Copyright 2016 INFORMATION, INC.
Powered by Information, Inc.


To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe