Welcome to the September 23, 2015 edition of ACM TechNews, providing timely information for IT professionals three times a week.
HEADLINES AT A GLANCE
Do MOOCs Help?
Inside Higher Ed (09/23/15) Carl Straumsheim
Massive open online courses (MOOCs) from Coursera can yield tangible benefits to learners, according to a new longitudinal study of open online learning outcomes published in Harvard Business Review. The University of Pennsylvania/University of Washington study concentrated on what students gain by completing MOOCs and whether MOOCs can work to the advantage of participants of low socioeconomic status, says the University of Pennsylvania's Ezekiel J. Emanuel. The researchers polled 51,954 Coursera MOOC students, 58 percent of whom were male, 69 percent of whom held a bachelor's or master's degree, and 58 percent of whom were employed full time. Fifty-two percent of respondents said they enrolled for professional advancement, versus 28 percent who cited educational advancement. One third reported tangible career benefits, compared with 18 percent who reported tangible educational benefits. Among those citing career benefits, 62 percent felt better prepared for their jobs after completing a MOOC, while 43 percent thought MOOCs made them more competitive job applicants. Twenty-six percent said the biggest benefit they received was landing a new job. In addition, low-income, under- or unemployed learners without bachelor's degrees were more likely to report benefits from MOOCs.
Instead of Robots Taking Jobs, AI May Help Humans Do Their Jobs Better
Computerworld (09/22/15) Sharon Gaudin
Oregon State University professor Tom Dietterich sees vast potential for collaboration between humans and artificial intelligence (AI), contrary to the popular assumption that AI will replace people in their jobs. "Each of us would have an AI assistant that we would train in our lives and the two of us, together, would be employed," he speculates. "This is where we can see super-human performance coming from the combination of the human and the computer." Dietterich cites current examples of human/smart-machine cooperation, including the accelerated three-dimensional modeling of an HIV enzyme. Among the areas he anticipates will rapidly adopt human/AI collaboration are high-speed stock trading, automated surgical assistants, and autonomous weapons. University of California, Berkeley professor Trevor Darrell expects the capabilities of smart devices to multiply by orders of magnitude within the next five to 10 years, while the U.S. Defense Advanced Research Projects Agency's Pam Melroy goes further, envisioning a convergence of AI and biology. "There's something about human-machine communication symbiosis and how humans and machines can partner well together," she says. Melroy notes a smart prosthetic appendage to replace a lost limb is one possible example of this intersection, but acknowledges it will require advances in thought-controlled devices.
The Hit Charade
Technology Review (09/22/15) Will Knight
Computers' inability to understand or appreciate music is the reason Apple Music and similar businesses are enlisting humans to organize playlists, despite the advances recommendation algorithms have made in recent years. These programs use statistical techniques to parse listener data and make an educated guess about what people might like; genuinely accounting for human taste is beyond their capabilities. One of the more inventive playlist algorithms, from Spotify, employs vast volumes of data to make tailored recommendations. Spotify's Chris Johnson says the firm collects as much data as possible on a user's listening habits and then compares it with data culled from other users. Opinions on new music posted to blogs, social media, and news websites also are fed into recommendations to further personalize playlists. However, these methods cannot overcome the algorithms' inability to suggest brand-new songs, for which there is no listener data to draw on. In 2014, Spotify started training a deep-learning network to identify frequency features in the audio signals of millions of songs so it could classify new songs, but the network cannot arrange songs creatively and would be confused by new musical styles. Some experts suggest a focus on creativity could offer a feasible measure of machine intelligence.
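To make the user-to-user comparison Johnson describes concrete, here is a minimal collaborative-filtering sketch in Python; all names and play counts are hypothetical, and a production system would use far richer signals. Note the cold-start limitation mentioned above falls out directly: a brand-new song with no plays anywhere can never be recommended this way.

    # Minimal collaborative filtering: recommend songs a user has not heard,
    # scored by the play counts of the most similar listeners.
    from math import sqrt

    plays = {  # user -> {song: play count}; hypothetical data
        "ana":  {"song_a": 12, "song_b": 3, "song_c": 0},
        "ben":  {"song_a": 10, "song_b": 4, "song_c": 8},
        "cara": {"song_a": 0,  "song_b": 9, "song_c": 1},
    }

    def cosine(u, v):
        """Cosine similarity between two users' play-count vectors."""
        songs = set(u) | set(v)
        dot = sum(u.get(s, 0) * v.get(s, 0) for s in songs)
        nu = sqrt(sum(x * x for x in u.values()))
        nv = sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    def recommend(user, k=1):
        """Score each unheard song by similarity-weighted play counts."""
        scores = {}
        for other, counts in plays.items():
            if other == user:
                continue
            sim = cosine(plays[user], counts)
            for song, n in counts.items():
                if plays[user].get(song, 0) == 0 and n > 0:
                    scores[song] = scores.get(song, 0) + sim * n
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("ana"))  # ana's closest neighbor ben played song_c -> ['song_c']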
What Will Personal Computers Look Like in 20 Years' Time?
Wired.co.uk (09/18/15) Gian Volpicelli
Experts offer their views on how personal computers (PCs) will evolve over the next two decades, with Affectiva's Rana El-Kaliouby predicting PCs will have a chip that reads the user's emotions from sensor input and prompts actions in response to those emotions. University of Sussex professor Winfried Hensinger expects future systems to solve problems at ultra-fast speeds, as data processing will occur not on a personal device but on a remote, powerful, cloud-linked machine. Meanwhile, Harvard University's Charles M. Lieber anticipates a direct integration between three-dimensional nanoelectronics and the human brain, to the degree that computing will be neurally operated. Frog Design's Denise Gershbein thinks PCs will be characterized by more distributed points of access within 10 years. "Your 'device' will be your digital identity, used like a key to enter the system from any number of screens and actions," she speculates. Finally, Andy Adamatzky, director of the University of the West of England's Unconventional Computing Center, envisions the emergence of intra-personal and intra-cellular personal computing, eventually giving rise to a unified network spanning all living creatures. "Each human neuron will be hijacked by a self-growing, self-repairing, molecular network," he says. "Computers will be networks of polymer filaments growing inside and together with a human."
Forget the Turing Test--There Are Better Ways of Judging AI
New Scientist (09/21/15) Jacob Aron
Despite the media furor over reports last year that a chatbot had "passed" the Turing test, most artificial intelligence (AI) researchers no longer view the test, first outlined by Alan Turing more than half a century ago, as particularly useful. "There has been a shift to trying to replicate these more fundamental abilities on which intelligence is built," says University of Massachusetts, Amherst professor Erik Learned-Miller. Those more fundamental abilities include technologies such as computer vision, which he studies. Learned-Miller says significant gains have been made in the field, with some programs now performing as well as or better than humans on certain tests of their ability to recognize faces and objects. Other researchers are examining AI's ability to play complex games such as poker, another area in which programs have had recent success. However, even these gains are small steps toward true intelligence. Olga Russakovsky, a postdoctoral research fellow at the Carnegie Mellon Robotics Institute, says although individual AIs are making progress on certain visual tests, they remain a far cry from an intelligent machine. "To show true intelligence, machines will have to draw inferences about the wider context of an image and what might happen one second after a picture was taken," Russakovsky says.
Sensors You Can Swallow Could Be Made of Nutrients and Powered by Stomach Acid
IEEE Spectrum (09/21/15) Neil Savage
Carnegie Mellon University (CMU) researchers are working on designs for an ingestible sensor that would combine silicon circuitry with nutrients and could be powered by stomach acid. One of the major hurdles in designing ingestible sensors is convincing regulators they would be safe. The approach of Christopher Bettinger's team at CMU is to use organic and biodegradable materials that are already considered safe to ingest. The team envisions silicon logic circuits encapsulated in a biodegradable hydrogel, which would enable the sensor to squeeze through tight openings. The antennas and electronics would be made of small amounts of digestible minerals such as manganese, magnesium, and copper, and the silicon in the proposed logic circuits can be converted by the body into silicic acid. The sensor would be powered by a battery with a cathode made of melanin and an anode made of manganese oxide; when the battery reaches the stomach, acidic gastric juices would act as an electrolyte and carry the current. In testing, the design has provided 5 milliwatts of power for up to 20 hours. The researchers say ingestible sensors could be used to study the microbiome, look for infections, and monitor medication uptake.
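For scale, a quick back-of-envelope check of the reported battery figures (5 milliwatts for up to 20 hours); the arithmetic below is illustration only, not a model of the device:

    # Total energy delivered at the reported rate: 5 mW for 20 hours.
    power_w = 0.005            # 5 milliwatts
    runtime_s = 20 * 3600      # 20 hours in seconds
    energy_j = power_w * runtime_s
    print(f"{energy_j:.0f} J, i.e. {energy_j / 3.6:.0f} mWh")  # -> 360 J, i.e. 100 mWh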
3D Computer Chips Could Be 1,000 Times Faster Than Existing Ones
LiveScience (09/20/15) Tia Ghose
Stanford University researchers say they have developed a method of designing and building computer chips that could lead to processing speeds at least 1,000 times faster than those of conventional chips. The method relies on carbon nanotubes (CNTs) and enables scientists to build chips in three dimensions, a design that interweaves memory and processors in the same space. Reducing the distance between those two components can dramatically reduce the time computers take to do their work, according to Stanford doctoral student Max Shulaker. CNTs have electrical properties similar to those of conventional silicon transistors, but in a head-to-head comparison, a CNT transistor would be faster and use less energy than a silicon one, according to Shulaker. However, CNTs grow in a disorderly manner, so the researchers grew them in narrow grooves to guide them into alignment--yet only 99.5 percent of the CNTs actually became aligned. The researchers removed the wayward CNTs by drilling holes at certain spots within the chip. Shulaker says the new three-dimensional design significantly reduces the transit time between transistors and memory, and the resulting architecture can produce computing speeds up to 1,000 times faster than would otherwise be possible. The researchers have used the new architecture to build a variety of sensor wafers, and the next step is to build even bigger, more complicated chips.
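To see why even 99.5 percent alignment forces a cleanup step, consider the stray count at realistic scale; the CNT total below is a hypothetical figure for illustration only:

    # Even a 0.5 percent misalignment rate leaves many stray nanotubes.
    cnts_on_chip = 1_000_000           # hypothetical CNT count per chip
    stray = int(cnts_on_chip * 0.005)  # the 0.5 percent that miss the grooves
    print(stray)                       # -> 5000 wayward CNTs to drill out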
Rock Paper Scissors Robot Wins 100 Percent of the Time
Extreme Tech (09/18/15) Graham Templeton
A robot that can beat any human player at Rock Paper Scissors shows the potential of future human-machine interfaces. Japanese researchers report the Janken robot beat human players 100 percent of the time, although it did not "win" by the rules of the game. Janken uses a high-speed camera and electronic reflexes to identify the oncoming shape of the human opponent's hand and play the corresponding move to beat it. The robot followed three strategies; the slowest produced a winning move about 0.02 seconds after the human's move, while the fastest was almost instantaneous. In every scenario, the robot is technically waiting to see the opponent's move before deciding on its own, which is cheating, and it needs specialized backgrounds and lighting conditions to work. Still, the ability to see and react to human movement fast enough to fool human perception is an important threshold: real-time robot response to human movement could have applications in military and industrial exoskeletons, for example.
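Once the camera classifies the opponent's forming hand shape, the robot's "strategy" reduces to looking up the winning counter-move, as in this Python sketch; the recognition step, which is the hard high-speed-vision part, is assumed here:

    # The beats-relation of Rock Paper Scissors as a counter-move table.
    COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

    def respond(detected_human_move: str) -> str:
        """Return the move that beats what the vision system just recognized."""
        return COUNTER[detected_human_move]

    print(respond("rock"))  # -> paper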
Not Even the People Who Write Algorithms Really Know How They Work
The Atlantic (09/18/15) Adrienne LaFrance
Not only are the algorithms that determine what people see on the Web--search results, status updates, or product recommendations--inscrutable to users, but even the engineers who develop them do not know exactly how they work. Andrew Moore, dean of computer science at Carnegie Mellon University, notes machine-learning models train themselves on vast amounts of information about previous users. However, Moore says it is becoming increasingly difficult to know what processes machine-learning models use and what data they collect. The data can range from the color of the pixels on a movie poster to a person's physical proximity to other people who enjoyed the movie, and the features a model analyzes and prioritizes could number 2,000 or 100,000. Moore says we are "moving away from, not toward the world where you can immediately give a clear diagnosis" of what a data-fed algorithm is doing with a person's Web behaviors. "You might be overestimating how much the content-providers understand how their own systems work," Moore says. As machine-learning systems grow ever more complex, they also could inadvertently hurt people--for example, by using a piece of information that leads to a loan rejection.
UTA Computer Scientist to Develop Software Engineering Methods That Ensure Good Upgrades
University of Texas at Arlington (09/17/15) Herb Booth
University of Texas at Arlington (UTA) researchers are using a $174,634 grant from the U.S. National Science Foundation to develop software engineering methods they say will enable safe upgrades of cyber-physical systems in the energy domain. "This research will develop new software engineering techniques implemented in software tools to automatically detect if cyber-physical upgrades are unsafe, and then attempt to mitigate those unsafe effects at either design time or runtime if they are detected," says UTA professor Taylor T. Johnson. He says the research could yield more reliable and safer cyber-physical systems, and by easing verification and validation it will enable software and systems engineers to design better, more efficient systems in the future. The findings will be evaluated in an energy cyber-physical systems testbed, specifically an electrical distribution microgrid. Johnson notes such microgrids are quickly becoming popular as interfaces for renewables that produce direct current, such as photovoltaics. "Johnson's research could save countless hours and have a positive financial impact on the energy industry," says Khosrow Behbehani, dean of the UT Arlington College of Engineering.
Inside USC's Crazy Experimental VR Lab
The Verge (09/17/15) Adi Robertson
The University of Southern California's Institute for Creative Technologies (ICT) was launched in 1999 with a $45-million grant from the U.S. Army, and the technologies and techniques developed there since have made a significant impact in several realms. For example, the Wide5 virtual reality (VR) headset, developed by a company started by Mark Bolas, head of ICT's Mixed Reality Lab, helped inspire the current rush to develop commercial VR headsets. ICT also employed Oculus founder Palmer Luckey for a time after he inquired about the Wide5. In addition, ICT researchers created the Light Stage, a geodesic dome containing lights and cameras used to create three-dimensional replicas of human subjects, which have been used behind the scenes in films such as "Avatar" and "The Curious Case of Benjamin Button"; the technology earned its creators an Academy Award. Researcher Albert Rizzo developed a VR-based treatment at ICT for soldiers suffering from post-traumatic stress disorder, drawing in part on the video game "Full Spectrum Warrior," which also got its start at ICT. Current projects at the lab include systems that enable drones to track and interact with moving objects, and Project BlueSpark, a joint initiative between ICT and the U.S. Navy to create a VR simulation of a ship's bridge.
NSF Grant Allows Researchers to Explore Use of Co-Robots in Teaching
Penn State News (09/16/15) Pamela Krewson Wertz
Pennsylvania State University (PSU) professors Conrad Tucker and Timothy Brick have been awarded a $342,574 National Robotics Initiative grant from the U.S. National Science Foundation. Tucker says they are co-principal investigators on a project intended to "lead to a better understanding of how students interact and function with co-robots during potentially stressful activities." The research will examine whether repeated cycles of observation, inference, and intervention by co-robot systems can improve students' moods and their performance of tasks in an engineering lab setting. The team will capture facial, auditory, and body-gesture data from students using the co-robot's integrated visual, audio, and depth sensors, and then make statistical inferences about students' affective states based on machine-learning classification of the facial and body-language data. The next step is to use the co-robot's visual feedback display to present students with instructions and commentary intended to improve their affective state and their performance on laboratory tasks. "The co-robot systems proposed in this work will help close this knowledge gap by uncovering the correlations that exist between students' affect and task performance," Tucker says.
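The observe-infer-intervene loop described above could be sketched in a few lines of Python; the feature names, affect labels, training data, and classifier choice here are illustrative assumptions, not details of the PSU system:

    # Hypothetical affect-inference step: classify a student's state from
    # extracted facial/body features, then choose an intervention.
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [brow_furrow, smile, posture_lean, speech_rate] (assumed features)
    train_X = [[0.9, 0.1, -0.4, 1.3],
               [0.1, 0.8,  0.2, 1.0],
               [0.2, 0.1, -0.8, 0.6]]
    train_y = ["frustrated", "engaged", "bored"]

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(train_X, train_y)

    INTERVENTION = {
        "frustrated": "display a step-by-step hint for the current lab task",
        "engaged": "no intervention needed",
        "bored": "suggest a more challenging variation of the task",
    }

    state = clf.predict([[0.8, 0.2, -0.3, 1.2]])[0]
    print(state, "->", INTERVENTION[state])  # likely: frustrated -> display a hint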
Who Is Driving Then?
Technical University of Darmstadt (Germany) (09/16/15) Jutta Witte
In an interview, professors Hermann Winner and Walther Wachenfeld of the Technical University of Darmstadt's Automotive Engineering research group discuss the risks, challenges, and opportunities of autonomous driving. Winner thinks highly automated vehicles will be driving on certain routes relatively soon, although he does not expect autonomous cars in regular traffic everywhere at all times for at least 30 years. "The driver will have to take over the steering wheel if necessary, but may also deal with other things, for example process emails without paying attention to traffic--whilst the system prompts acceptance," Wachenfeld says. He stresses that, unlike systems that rely on driver intervention, highly automated vehicles must be able to bring themselves to a safe emergency stop, and must do so whenever required. Winner believes autonomous driving can lead to a further reduction in accidents, but he cautions that "any new system which has an influence on traffic produces new problems. It is, however, important that the end result is positive." Winner acknowledges there is still uncertainty about whether a machine's reactive ability faces limits a person's does not, but he emphasizes "caution will be the main criterion, especially in the infancy of autonomous driving." He notes defensive autonomous driving will likely introduce its own weaknesses, such as an inability to anticipate traffic proficiently.
Abstract News © Copyright 2015 INFORMATION, INC.
To submit feedback about ACM TechNews, contact: [email protected]
Current ACM Members: Unsubscribe/Change your email subscription by logging in at myACM.
Non-Members: Unsubscribe