Welcome to the January 22, 2016 edition of ACM TechNews, providing timely information for IT professionals three times a week.
Updated versions of the ACM TechNews mobile apps are available for Android phones and tablets and for iPhones and iPads.
HEADLINES AT A GLANCE
The Mobile Phone of the Future Will Be Implanted in Your Head
CNet (01/19/16) Marguerite Reardon
Many industry leaders think the world is entering a time of momentous changes involving software, according to a World Economic Forum survey. The expected changes include technology related to artificial intelligence, Internet-connected devices, three-dimensional (3D) printing, and phones implanted in users' heads. "Computers and other digital advances are doing for mental power--the ability to use our brains to understand and shape our environments--what the steam engine and its descendants did for muscle power," says Erik Brynjolfsson, director of the Massachusetts Institute of Technology's Initiative on the Digital Economy. For example, experts say embeddable devices implanted in the body that use wireless technology could be commercially available by 2023. Although such technology could revolutionize communication and bring many other benefits, there also are concerns about privacy and government surveillance. Survey respondents also predict that by 2026, 10 percent of cars on U.S. roads will be driverless. The survey identified 21 "tipping points" for technologies that may sound futuristic but are just a few years away from mass adoption, including Internet-connected reading glasses and the first transplant of a 3D-printed liver.
New Biggest Prime Number = 2 to the 74 Mil...Uh, It's Big
The New York Times (01/21/16) Kenneth Chang
Using otherwise idle computers, the 20-year-old Great Internet Mersenne Prime Search (Gimps) project has found the largest known prime number, nearly 5 million digits longer than the previous record-holder of about 17 million digits. The project concerns itself with Mersenne primes, which can be written in the form 2^n - 1, where n is an integer. Primes occur less frequently as integers get bigger, and only 49 Mersenne primes are known in total. Gimps functions through the actions of volunteers, who download free software that runs unobtrusively on their computers when they are not in use. University of Central Missouri professor Curtis Cooper has installed the software on 800 university PCs, which were responsible for finding three other Mersenne primes in addition to the latest one. Although prime numbers are essential to fields such as cryptography, the newest one is so big it has no practical application for the time being.
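Gimps checks candidates of the form 2^p - 1 with the Lucas-Lehmer test, which decides primality without trial division. The sketch below illustrates the test on the first few (small) Mersenne exponents; it is a didactic version, not Gimps's heavily optimized software:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test, valid for odd primes p: M_p = 2**p - 1 is prime
    iff s_(p-2) == 0 (mod M_p), where s_0 = 4 and s_(i+1) = s_i**2 - 2."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Filter a few small odd prime exponents; 2**11 - 1 = 2047 = 23 * 89 fails.
mersenne_exponents = [p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)]
```

Because the only expensive step is repeated squaring modulo M_p, the test scales to the multi-million-digit numbers Gimps searches.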
Japan Road Tests Self-Driving Cars to Keep Aging Motorists Mobile
The Wall Street Journal (01/21/16) Mike Ramsey; Miho Inada; Yoko Kubota
Japan's automakers aim to meet the challenge of aging drivers with few transportation options by testing self-driving vehicles on roads. The Japanese government is allocating about $16.3 million annually to develop maps and other technologies needed for automated driving so in four years Japan could offer driverless vehicles to transport visitors and athletes to and from venues at the 2020 Olympics. Nissan Motor and Renault CEO Carlos Ghosn says government support could ease regulatory challenges and enable Japan to commercialize autonomous cars ahead of everyone else. Kanazawa University has developed a prototype self-driving Toyota Prius, using the city of Suzu's roads as a testbed. Suzu official Naoyuki Kaneda envisions autonomous technology finding use in buses or taxis. However, researchers and carmakers must address the high cost of the gear needed to make cars autonomous, which can include radar, cameras, laser-range finders, and computer processors. Refining how the vehicles can operate in inclement weather with poor visibility is another challenge. Toyota recently invested $1 billion in an artificial intelligence operation based in Silicon Valley to aid its driverless car effort.
U.S. Military Wants to Create Cyborg Soldiers
Computerworld (01/21/16) Sharon Gaudin
The U.S. Defense Advanced Research Projects Agency (DARPA) is working to create a chip that can be implanted in a soldier's brain to connect it directly to computers that can deliver data on an enemy's position, maps, and battle instructions. The implanted chip would essentially create soldier cyborgs that would be safer and better fighters. DARPA announced this week it formed a new program to develop a neural interface that would create "unprecedented signal resolution and data-transfer bandwidth" between the human brain and the digital world. Although there are already some neural interfaces approved for use with humans, DARPA wants to improve the technology so the system can communicate clearly and individually with up to 1 million neurons in a specific region of the brain. DARPA's research project will involve scientists from various fields, including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging, and manufacturing. The researchers will work on advanced mathematical and neuro-computation techniques to translate between biological-electromechanical language and digital binary code. The military hopes to find industry partners for the research, offering prototyping and manufacturing services and intellectual property.
How an AI Algorithm Learned to Write Political Speeches
Technology Review (01/19/16)
University of Massachusetts, Amherst (UMass Amherst) researchers have developed an artificial intelligence (AI) machine that learned to write political speeches closely resembling real ones. The researchers used a database of nearly 4,000 political speech segments from 53 U.S. Congressional floor debates to train a machine-learning algorithm to produce speeches of its own. They also categorized the speeches by political party and by whether each was for or against a given topic. The researchers designed the system using an approach based on n-grams, which are sequences of "n" words or phrases. They analyzed the text using a parts-of-speech approach, which tags each word or phrase with its grammatical role before looking at 6-grams and the probability of a word or phrase appearing given the five that appear before it. UMass Amherst researcher Valentin Kassarnig says the technique "allows us to determine very quickly all words which can occur after the previous five ones and how likely each of them is." The speech-writing software explores the 6-gram database for a specific category of speech to find the entire set of 5-grams that have been used to start those speeches, then chooses one of these 5-grams at random to start its speech.
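The 6-gram scheme described above can be sketched in a few lines. This is a minimal illustration on a toy corpus (it omits the parts-of-speech tagging and the category filtering, and the function names are hypothetical), not the UMass Amherst code:

```python
import random
from collections import defaultdict

def build_ngram_model(corpus_tokens, n=6):
    """Map each (n-1)-word prefix to the words observed after it, with counts."""
    model = defaultdict(lambda: defaultdict(int))
    for i in range(len(corpus_tokens) - n + 1):
        prefix = tuple(corpus_tokens[i:i + n - 1])
        model[prefix][corpus_tokens[i + n - 1]] += 1
    return model

def generate(model, seed_prefix, length=30, rng=random):
    """Start from a seen 5-gram and repeatedly sample the next word
    in proportion to how often it followed the current 5-word prefix."""
    out = list(seed_prefix)
    for _ in range(length):
        choices = model.get(tuple(out[-len(seed_prefix):]))
        if not choices:
            break  # prefix never seen in the corpus
        words, counts = zip(*choices.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)
```

With a real corpus, each 5-word prefix typically has many observed successors, which is what gives the generated speeches their variety.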
NYU Wireless to Build Test Bed for 5G
Campus Technology (01/20/16) Leila Meyer
New York University's NYU Wireless, a multidisciplinary academic research center for wireless networking theories and techniques, is collaborating with SiBEAM and National Instruments to build an advanced programmable platform to support the development of 5G wireless technologies based on millimeter wave (mmWave) wireless communication. The software-defined radio platform will help researchers rapidly design, prototype, and validate key technologies for the mmWave radio spectrum, and "will be one of the first of its kind available to researchers from academia, government, and industry who are driving the early stages of mmWave technology," according to the university. The researchers note 5G technology potentially could support data connection speeds exceeding 10 gigabits per second, which is 1,000 times faster than current 4G data rates. In addition, the mmWave spectrum could "provide 200 times the capacity of all of today's cellular spectrum allocations," the university notes. The project is being funded in part by a nearly $100,000 grant from the U.S. National Science Foundation, and NYU Wireless will release the system to other university and industry groups to speed the development of mmWave technology.
MIT's Food Computer: The Future of Urban Agriculture?
IEEE Spectrum (01/20/16) Mark Anderson
Technologists at the Massachusetts Institute of Technology (MIT) are developing an open source, digitized food-growing system, a concept they say has the potential to retool food production to accommodate high-density urban living. The Food Computer (FC) uses robotic systems and actuated climate-, energy-, and plant-sensing mechanisms to create a controlled environment. "Not unlike climate-controlled data centers optimized for rows of servers, FCs are designed to optimize agricultural production by monitoring and actuating a desired climate inside of a growing chamber," says Caleb Harper, principal research scientist for the Open Agriculture (OpenAG) Initiative at MIT's Media Lab. Sensors would monitor the growing conditions for the crop and fine-tune the light exposure, temperature, humidity, carbon dioxide level, water cycle, and nutrient exposure according to a preset recipe for growing the plant. The recipes for each crop, the controlling software, and sensing data would be freely circulated among FC users for tweaking and improvement. OpenAG envisions a tabletop FC, as well as a walled walk-in module the size of a shipping container, and a warehouse-scale unit.
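The monitor-and-actuate loop Harper describes can be sketched as a single feedback step. The recipe variables, setpoints, and 5-percent deadband below are hypothetical illustrations, not OpenAG values:

```python
# Hypothetical "recipe" setpoints for one crop (not actual OpenAG data).
RECIPE = {"temperature_c": 22.0, "humidity_pct": 65.0, "co2_ppm": 900.0}

def control_step(readings, recipe=RECIPE, deadband=0.05):
    """One pass of a bang-bang climate loop: for each monitored variable,
    decide whether to actuate up, down, or hold, within a +/-5% deadband."""
    actions = {}
    for var, target in recipe.items():
        value = readings[var]
        if value < target * (1 - deadband):
            actions[var] = "increase"
        elif value > target * (1 + deadband):
            actions[var] = "decrease"
        else:
            actions[var] = "hold"
    return actions
```

In a real FC the "increase"/"decrease" decisions would drive heaters, humidifiers, CO2 injectors, and lights, and the recipe itself would be one of the shared, community-tuned files.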
Robots Could Make the Supreme Court More Transparent
The Atlantic (01/20/16) Adrienne LaFrance
Massachusetts Institute of Technology graduate William Li and colleagues have built an algorithm designed to determine which U.S. Supreme Court justices wrote unsigned opinions. Known in the legal world as per curiam decisions, such opinions are meant to correct glaring errors, and Li says courts arguably abuse the veil of anonymity. The team began its work in 2012, amid rumors that Chief Justice John Roberts changed his mind at the last minute on the Affordable Care Act. Li and colleagues used a combination of statistical data mining and machine learning to glean the individual writing style of each justice from the opinions they signed over the years. The algorithm picked up on the justices' distinct signatures--words and phrases used and the way they structured sentences--and the researchers tested the accuracy of its findings by showing it signed opinions with the authorship withheld. They report the bot was correct 81 percent of the time. Although the researchers say with enough data a robot justice could issue decisions in the style of individual justices, Li notes "the justice Turing test, if you will, might be difficult to solve."
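The stylometric idea--fingerprint each justice from signed opinions, then match an unsigned one against those fingerprints--can be illustrated with a toy nearest-author classifier over function-word frequencies. The feature set and similarity measure here are illustrative stand-ins, not the researchers' actual pipeline:

```python
import math
from collections import Counter

# A tiny, illustrative feature set; real stylometry uses hundreds of features.
FUNCTION_WORDS = ["the", "of", "and", "to", "that", "in", "we", "it"]

def style_vector(text):
    """Relative frequencies of common function words: a crude style fingerprint."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def attribute(unsigned_text, signed_corpora):
    """Assign the unsigned text to the author whose signed writing is most
    similar to it under the cosine measure."""
    target = style_vector(unsigned_text)
    return max(signed_corpora,
               key=lambda a: cosine(style_vector(signed_corpora[a]), target))
```

The reported 81-percent accuracy came from exactly this kind of held-out evaluation: hide the author of a signed opinion and check whether the classifier recovers it.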
This Smartphone Technology 3D Maps Your Meal and Counts Its Calories
University of Washington News and Information (01/19/16) Jennifer Langston
University of Washington (UW) researchers have developed NutriRay3D, a laser-mapping technology and smartphone app that enables users to point a smartphone at a plate of food and get an accurate count of the total calories and nutrition. NutriRay3D uses laser-based three-dimensional (3D) reconstruction techniques to calculate the caloric content of 9,000 types of food. In initial user studies, the system estimated nutritional content with between 87.5-percent and 91-percent accuracy. A laser accessory connects to the smartphone and projects a grid of dots onto a plate or bowl and calculates the volume of food; the measurements are used to estimate the nutritional content of a particular piece of food. "It creates a 3D map based on where the dots align, and then you can put them all together to get the actual volume of the food, and estimating these portion sizes is where average people and other calorie counting methods often fall short," says UW researcher Sep Makhsous. The app crosschecks the measurements against a database of foods to calculate the nutritional content on the plate. In addition, the NutriRay3D app can identify basic foods on its own but enables users to either speak into the phone or manually enter details about more complicated meals.
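The dot-grid volume estimate works by treating every laser dot as a height sample over one small grid cell, then converting volume to mass and calories via a food database. The sketch below is a simplified illustration; the one-entry database, density, and calorie figures are made up, not NutriRay3D's values:

```python
# Hypothetical database entry for illustration only.
FOOD_DB = {"rice": {"density_g_per_ml": 0.85, "kcal_per_g": 1.3}}

def estimate_volume_ml(dot_heights_mm, dot_spacing_mm):
    """Treat each laser dot as sampling the height of a food column whose
    cross-section is one grid cell; sum the columns, convert mm^3 -> ml."""
    cell_area = dot_spacing_mm ** 2
    return sum(h * cell_area for h in dot_heights_mm) / 1000.0

def estimate_calories(food, dot_heights_mm, dot_spacing_mm, db=FOOD_DB):
    """Volume -> grams (via density) -> kilocalories (via energy density)."""
    entry = db[food]
    grams = estimate_volume_ml(dot_heights_mm, dot_spacing_mm) * entry["density_g_per_ml"]
    return grams * entry["kcal_per_g"]
```

Portion-size estimation is exactly the step this geometry handles, which is why the UW team highlights it as where manual calorie counting usually fails.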
New Methods for More Energy-Efficient Internet Services
Umea University (Sweden) (01/19/16) Ingrid Soderbergh
Umea University researcher Mina Sedaghat has developed techniques and algorithms to manage and schedule the resources in large data centers at less cost, as well as more efficiently and reliably, and with lower environmental impact. Sedaghat's research includes methods and techniques to efficiently use the servers in the data centers, so the load associated with the information generated by nearly 1 billion Internet users can be served with fewer resources. "It could be optimized scheduling systems packing several software components into a few servers in a way that makes full use of processors, memory, bandwidth, network capacity, and other resources," Sedaghat says. In this way, energy efficiency can be improved, and the negative environmental impact and the operational costs can be reduced, Sedaghat notes. The technology was developed in conjunction with researchers at the Royal Institute of Technology and Google.
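The consolidation Sedaghat describes--packing several software components onto few servers while respecting processor and memory limits--is a bin-packing problem. The classic first-fit-decreasing heuristic below is a generic sketch of that idea, not Sedaghat's algorithm:

```python
def first_fit_decreasing(tasks, capacity):
    """Pack (cpu, mem) demands onto as few servers as possible: consider
    tasks largest-first, place each on the first server with enough room."""
    servers = []  # each entry: [free_cpu, free_mem, assigned_tasks]
    for cpu, mem in sorted(tasks, key=lambda t: t[0] + t[1], reverse=True):
        for s in servers:
            if s[0] >= cpu and s[1] >= mem:
                s[0] -= cpu
                s[1] -= mem
                s[2].append((cpu, mem))
                break
        else:
            # No existing server fits; power on a new one.
            servers.append([capacity[0] - cpu, capacity[1] - mem, [(cpu, mem)]])
    return servers
```

Fewer powered-on servers translates directly into the energy and cost savings the research targets.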
British Voice Encryption Protocol Has Massive Weakness, Researcher Says
Network World (01/19/16) Jeremy Kirk
A protocol developed by the Communications-Electronics Security Group (CESG), the information security arm of the U.K.'s Government Communications Headquarters (GCHQ), for encrypting voice calls has a weakness built into it by design, according to Steven Murdoch, a researcher at University College London. He says the weakness in CESG's Multimedia Internet KEYing-Sakai-Kasahara Key Encryption (MIKEY-SAKKE) protocol could enable mass surveillance. The protocol's key escrow approach calls for a master decryption key to be held by a service provider. "The existence of a master private key that can decrypt all calls past and present without detection, on a computer permanently available, creates a huge security risk, and an irresistible target for attackers," Murdoch says. He notes the approach also makes the data of users more vulnerable to legal action, such as secret court orders. "This is presented as a feature rather than bug, with the motivating case in the GCHQ documentation being to allow companies to listen to their employees' calls when investigating misconduct, such as in the financial industry," Murdoch points out. The U.K. government has often expressed concern over how encryption could inhibit law enforcement and impact terrorism-related investigations, and Murdoch says the government only certifies voice encryption products that use the protocol.
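The risk Murdoch describes follows from the escrow structure itself: if every per-call key is derivable from one master secret, whoever holds that secret can reconstruct any call's key. The toy hash-based derivation below illustrates only that structural property; the real protocol uses Sakai-Kasahara identity-based cryptography, not a hash:

```python
import hashlib

def derive_call_key(master_secret: bytes, caller_id: bytes, call_nonce: bytes) -> bytes:
    """Toy escrow scheme: each per-call key is a deterministic function of a
    single master secret, so the escrow holder can recompute any key."""
    return hashlib.sha256(master_secret + b"|" + caller_id + b"|" + call_nonce).digest()

# The provider (or any attacker who steals the master secret) derives the
# same key as the caller -- decrypting past and present calls undetected.
caller_key = derive_call_key(b"master", b"alice", b"call-42")
provider_key = derive_call_key(b"master", b"alice", b"call-42")
```

By contrast, end-to-end designs with ephemeral keys (e.g., Diffie-Hellman per call) leave no single secret whose compromise exposes all traffic.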
SUNY Poly Researcher Gets $1.2 Million to Make Chips That Mimic Brain
Albany Times Union (NY) (01/19/16) Larry Rulison
State University of New York (SUNY) Polytechnic Institute professor Nathaniel Cady has received a $1.2-million grant from the U.S. Air Force Research Lab to create a new generation of computer chips that mimic the human brain. The grant is part of a larger, $2.4-million research project Cady is conducting with researchers at the University of Tennessee, Knoxville. He notes the new microprocessors use memristors because they are able to surpass many of the limitations of conventional silicon-based chips. For example, memristors could be more easily embedded in everyday devices. In addition, memristors can expand the binary code used by today's computer chips because they can deal with incremental resistance, allowing values beyond 1 and 0, which would let chips handle more complex data sets. Cady also notes memristors can hold their memory after power has been shut off, enabling the devices to turn on instantly. "We look forward to collaborating with the research team at (Tennessee) to substantially improve computer memory," he says.
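The "values beyond 1 and 0" idea is that a memristor's analog resistance is quantized into one of several discrete symbols rather than two, so a four-level cell stores two bits. A minimal sketch, with made-up nominal resistance values:

```python
def resistance_to_symbol(r_ohms, levels):
    """Quantize an analog resistance reading to the nearest nominal level.
    With four levels the cell encodes 2 bits; with two it is ordinary binary."""
    return min(range(len(levels)), key=lambda i: abs(levels[i] - r_ohms))

# Hypothetical nominal resistances for a 4-level (2-bit) cell.
LEVELS = [100, 1_000, 10_000, 100_000]
```

Noise margin is the catch: the more levels per cell, the closer the nominal resistances sit and the harder reliable read-out becomes.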
What Does the Future Hold for Artificial Intelligence?
CIO (01/21/16) Thor Olavsrud
Andrew Moore, dean of Carnegie Mellon University's School of Computer Science, notes artificial intelligence (AI) research at the school is focused on advancements such as poker-playing AIs, which must be exponentially more refined than chess-playing AIs because poker deals with hidden information. Moore believes teaching AIs to process situations with hidden data will open up a new spectrum of applications. He cites as examples projects to improve digital assistants such as Apple's Siri and Microsoft's Cortana by enabling them to work in circumstances in which they must withhold information. Moore also thinks the technology could support blind negotiations in which no opportunity exists for rigging the system. In addition, he predicts significant near-term growth for AI testing of autonomous or learning systems. Moore also envisions opportunities for AI to advance self-driving vehicles that take over from human motorists to avoid accidents, although ethics and philosophy must be added to the equation. "It is a wonderful possibility that a car can be very carefully controlled while crashing to reduce the loss of human life," he notes. Finally, Moore considers AI's ability to track micro-expressions as having major ramifications for healthcare, mental illness therapy, entertainment, and education.
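At its simplest, decision-making with hidden information--the poker problem Moore highlights--reduces to maximizing expected payoff over a belief about the opponent's hidden state. The hand labels and payoff numbers below are invented for illustration; real poker AIs reason over game trees far deeper than this one step:

```python
def expected_value_action(action_payoffs, hidden_state_probs):
    """Choose the action with the highest expected payoff, averaging each
    action's outcomes over a belief distribution on the hidden state."""
    def ev(action):
        return sum(p * action_payoffs[action][state]
                   for state, p in hidden_state_probs.items())
    return max(action_payoffs, key=ev)
```

When the opponent is probably weak, calling has positive expectation; shift the belief toward a strong hand and the same arithmetic flips the decision to folding, which is why accurately estimating the hidden state is the hard part.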
Abstract News © Copyright 2016 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.