Thursday, 8 March 2018

Novel 3-D printing method embeds sensing capabilities within robotic actuators



Integrating sensors within soft robots has been difficult in part because most sensors, such as those used in traditional electronics, are rigid. To address this challenge, the researchers developed an organic ionic liquid-based conductive ink that can be 3D printed within the soft elastomer matrices that comprise most soft robots.
"To date, most integrated sensor/actuator systems used in soft robotics have been quite rudimentary," said Michael Wehner, former postdoctoral fellow at SEAS and co-author of the paper. "By directly printing ionic liquid sensors within these soft systems, we open new avenues to device design and fabrication that will ultimately allow true closed loop control of soft robots."
Wehner is now an assistant professor at the University of California, Santa Cruz.
To fabricate the device, the researchers relied on an established 3D printing technique developed in the lab of Jennifer Lewis, the Hansjorg Wyss Professor of Biologically Inspired Engineering at SEAS and Core Faculty Member of the Wyss Institute. The technique -- known as embedded 3D printing -- seamlessly and quickly integrates multiple features and materials within a single soft body.
"This work represents the latest example of the enabling capabilities afforded by embedded 3D printing -- a technique pioneered by our lab," said Lewis.
"The function and design flexibility of this method is unparalleled," said Truby. "This new ink combined with our embedded 3D printing process allows us to combine both soft sensing and actuation in one integrated soft robotic system."
To test the sensors, the team printed a soft robotic gripper composed of three soft fingers or actuators. The researchers tested the gripper's ability to sense inflation pressure, curvature, contact, and temperature. They embedded multiple contact sensors, so the gripper could sense light and deep touches.
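As a rough illustration of how readings from such an embedded contact sensor might be interpreted in software (a minimal sketch with hypothetical calibration values, not the authors' actual code), a resistance change relative to a no-contact baseline could be thresholded to separate light from deep touches:

```python
# Illustrative sketch: classifying touch depth from an ionic contact sensor.
# The baseline resistance and thresholds are hypothetical calibration values,
# not measurements from the study.

BASELINE_OHMS = 12_000.0      # sensor resistance with no contact (assumed)
LIGHT_TOUCH_DELTA = 500.0     # change indicating a light touch (assumed)
DEEP_TOUCH_DELTA = 3_000.0    # change indicating a deep touch (assumed)

def classify_touch(resistance_ohms: float) -> str:
    """Map a raw resistance reading to a touch category."""
    delta = abs(resistance_ohms - BASELINE_OHMS)
    if delta < LIGHT_TOUCH_DELTA:
        return "no contact"
    if delta < DEEP_TOUCH_DELTA:
        return "light touch"
    return "deep touch"

for reading in (12_050.0, 13_200.0, 16_500.0):
    print(reading, "->", classify_touch(reading))
```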
"Soft robotics are typically limited by conventional molding techniques that constrain geometry choices, or, in the case of commercial 3D printing, material selection that hampers design choices," said Robert Wood, the Charles River Professor of Engineering and Applied Sciences at SEAS, Core Faculty Member of the Wyss Institute, and co-author of the paper. "The techniques developed in the Lewis Lab have the opportunity to revolutionize how robots are created -- moving away from sequential processes and creating complex and monolithic robots with embedded sensors and actuators."
Next, the researchers hope to harness the power of machine learning to train these devices to grasp objects of varying size, shape, surface texture, and temperature.
The research was coauthored by Abigail Grosskopf, Daniel Vogt and Sebastien Uzel. It was supported in part by the Harvard MRSEC and the Wyss Institute for Biologically Inspired Engineering.

Software aims to reduce food waste by helping those in need



"It is really heart wrenching to witness a mother in shabby and torn clothes, holding her baby, come to you and ask for help because her baby hasn't had anything to eat," Sharma said. "This I have witnessed often in my life."
Those interactions made an impression and heightened Sharma's awareness of hunger. When he moved to the U.S. in 2006 to continue his education, Sharma says he quickly recognized hunger was not just a problem in India. What he found most troubling was the amount of food wasted -- in the U.S. and India -- when so many people go without. After reading about elementary schools sending food packages home with students, Sharma decided to make hunger the primary focus of his research.
Given that 40 percent of the food produced in the U.S. is wasted, according to the USDA's Economic Research Service, Sharma wanted to find a way to divert excess food to those in need. It has taken almost three years for Sharma, a computer science expert and systems analyst in Iowa State University's Center for Survey Statistics and Methodology, and his collaborators to develop a software prototype -- eFeed-Hungers -- to do just that.
Ritu Shandilya, a third-year Ph.D. student in computer science; U. Sunday Tim, an associate professor of ag and biosystems engineering; and Johnny Wong, professor and associate chair of computer science, are all part of the research team. Their work is published online in the journals Resources, Conservation and Recycling, and Telematics and Informatics.
Making the connection
A program that distributes leftover food from catered events to the homeless in India inspired the vision for the online, interactive network, Sharma said. Restaurants, grocery stores and individuals can use the mobile-friendly software to post food they have to donate. Likewise, those in need can find nearby locations where food is available for pickup.
The researchers designed the software so donors take the food to a public place, such as a food pantry or church serving free meals, for pickup and distribution. It allows for one-time and recurring donations, so businesses or individuals do not have to enter their information repeatedly. Sharma says the interactive map makes it easy to search. Each location is marked with a flag to indicate the type of food, and hours it is available.
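The workflow described above suggests a simple underlying data model: donors post one-time or recurring offerings at public pickup points, and those in need search for nearby posts. The sketch below illustrates one way such posts and a proximity search might be represented; the field names and distance logic are illustrative assumptions, not the actual eFeed-Hungers implementation.

```python
# Illustrative sketch of a food-donation post and a nearby-pickup search,
# loosely following the article's description. Field names and the distance
# calculation are assumptions, not the real eFeed-Hungers system.
import math
from dataclasses import dataclass

@dataclass
class FoodPost:
    donor: str
    food_type: str          # e.g. "prepared meals", "produce"
    pickup_site: str        # public place such as a food pantry or church
    lat: float
    lon: float
    hours: str              # when the food is available for pickup
    recurring: bool = False # recurring posts avoid re-entering details

def distance_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance between two points (haversine)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearby_posts(posts, lat, lon, radius_km=5.0):
    """Return posts within radius_km of the searcher, nearest first."""
    hits = sorted(((distance_km(lat, lon, p.lat, p.lon), p) for p in posts),
                  key=lambda pair: pair[0])
    return [p for d, p in hits if d <= radius_km]

posts = [
    FoodPost("Campus Cafe", "prepared meals", "Food pantry", 42.030, -93.630,
             "5-7 pm daily", recurring=True),
    FoodPost("Grocery Co.", "produce", "Community church", 42.050, -93.600,
             "Sat 9-11 am"),
]
for p in nearby_posts(posts, 42.034, -93.620):
    print(p.pickup_site, "-", p.food_type, "-", p.hours)
```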
"We wanted to make it as simple as possible, so people will not hesitate to donate," Sharma said. "There is no scarcity of food. We see this as a way to take some of the food we're wasting and save it by providing a channel to get the extra food to the needy."
Test to implementation
Researchers continue to test the prototype and plan to launch the site for the Ames community in late summer or early fall. Sharma says they are working on funding to provide education and outreach for restaurants, food pantries, churches and residents interested in participating. Their goal is to gradually add other cities and regions that may benefit from the tool.
"Almost everyone has a cell phone and the technology has the potential for a much wider outreach," he said. "I don't know how successful we will be, but we're making an honest effort to tackle this problem. If we can help provide food for even one percent, we'll be happy."

Saturday, 10 February 2018

Android apps can conspire to mine information from your smartphone



Mobile phones have increasingly become the repository for the details that drive our everyday lives. But researchers have recently discovered that the same apps we regularly use on our phones to organize lunch dates, make convenient online purchases, and communicate the most intimate details of our existence have secretly been colluding to mine our information.
"Researchers were aware that apps may talk to one another in some way, shape, or form," said Wang. "What this study shows undeniably with real-world evidence over and over again is that app behavior, whether it is intentional or not, can pose a security breach depending on the kinds of apps you have on your phone."
The threats fall into two major categories: malware apps specifically designed to launch a cyberattack, and apps that simply allow for collusion and privilege escalation. In the latter category, it is not possible to quantify the intention of the developer, so collusion, while still a security breach, can in many cases be unintentional.
In order to run the programs to test pairs of apps, the team developed a tool called DIALDroid to perform their massive inter-app security analysis. The study, funded by the Defense Advanced Research Projects Agency as part of its Automated Program Analysis for Cybersecurity initiative, took 6,340 hours using the newly developed DIALDroid software, a task that would have taken considerably longer without it.
First author of the paper Amiangshu Bosu, an assistant professor at Southern Illinois University, spearheaded the software development effort and the push to release the code to the wider research community. Fang Liu, a fifth-year Ph.D. candidate studying under Yao, also contributed to the malware detection research.
"Our team was able to exploit the strengths of relational databases to complete the analysis, in combination with efficient static program analysis, workflow engineering and optimization, and the utilization of high performance computing. Of the apps we studied, we found thousands of pairs of apps that could potentially leak sensitive phone or personal information and allow unauthorized apps to gain access to privileged data," said Yao, who is both an Elizabeth and James E. Turner Jr. '56 and L-3 Faculty Fellow.
The team studied a whopping 110,150 apps over three years, including 100,206 of Google Play's most popular apps and 9,994 malware apps from Virus Share, a private collection of malware app samples. The setup for cybersecurity leaks works when a seemingly innocuous sender app, like that handy and ubiquitous flashlight app, works in tandem with a receiver app to divulge a user's information such as contacts or geolocation, or to provide access to the web.
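As a rough illustration of the sender/receiver pattern described above (a toy sketch, not DIALDroid, whose static analysis is far more sophisticated), the code below flags app pairs in which a sender holding a sensitive permission broadcasts on a channel that a receiver without that permission listens to; the app names, permissions, and channels are hypothetical.

```python
# Toy sketch of the sender/receiver collusion pattern from the article.
# This is NOT DIALDroid; app names, permissions, and channels are hypothetical.
from dataclasses import dataclass, field

@dataclass
class App:
    name: str
    permissions: set = field(default_factory=set)  # permissions the app holds
    sends_on: set = field(default_factory=set)     # channels it broadcasts data on
    listens_on: set = field(default_factory=set)   # channels it receives data from

SENSITIVE = {"READ_CONTACTS", "ACCESS_FINE_LOCATION"}

def find_risky_pairs(apps):
    """Flag pairs where privileged data could flow to an unprivileged receiver."""
    risky = []
    for sender in apps:
        leaked = sender.permissions & SENSITIVE
        if not leaked:
            continue
        for receiver in apps:
            if receiver is sender:
                continue
            shared = sender.sends_on & receiver.listens_on
            escalated = leaked - receiver.permissions
            if shared and escalated:
                risky.append((sender.name, receiver.name, sorted(escalated)))
    return risky

flashlight = App("flashlight", {"ACCESS_FINE_LOCATION"}, sends_on={"com.example.SHARE"})
widget = App("ringtone_widget", listens_on={"com.example.SHARE"})
for pair in find_risky_pairs([flashlight, widget]):
    print("potential collusion / privilege escalation:", pair)
```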
The team found that some of the biggest security risks came from the least utilitarian apps: those for personalizing ringtones, widgets, and emojis.
"App security is a little like the Wild West right now with few regulations," said Wang. "We hope this paper will be a source for the industry to consider re-examining their software development practices and incorporate safeguards on the front end. While we can¹t quantify what the intention is for app developers in the non-malware cases we can at least raise awareness of this security problem with mobile apps for consumers who previosuly may not have thought much about what they were downloading onto their phones."

Artificial neural networks decode brain activity during performed and imagined movements


Filtering information for search engines, acting as an opponent during a board game or recognizing images: Artificial intelligence has far outpaced human intelligence in certain tasks. Several groups from the Freiburg excellence cluster BrainLinks-BrainTools led by neuroscientist private lecturer Dr. Tonio Ball are showing how ideas from computer science could revolutionize brain research. In the scientific journal Human Brain Mapping they illustrate how a self-learning algorithm decodes human brain signals that were measured by an electroencephalogram (EEG).
The decoded data included executed movements as well as hand and foot movements that were merely imagined, and the imagined rotation of objects. Even though the algorithm was not given any signal characteristics ahead of time, it works as quickly and precisely as traditional systems that were created to solve specific tasks based on predetermined brain signal characteristics and are therefore not appropriate for every situation.
The demand for such diverse intersections between human and machine is huge: At the University Hospital Freiburg, for instance, the approach could be used for early detection of epileptic seizures. It could also be used to improve communication options for severely paralyzed patients or to support automated neurological diagnosis.
"Our software is based on brain-inspired models that have proven to be most helpful to decode various natural signals such as phonetic sounds," says computer scientist Robin Tibor Schirrmeister. The researcher is using it to rewrite methods that the team has used for decoding EEG data: So-called artificial neural networks are the heart of the current project at BrainLinks-BrainTools. "The great thing about the program is we needn't predetermine any characteristics. The information is processed layer for layer, that is in multiple steps with the help of a non-linear function. The system learns to recognize and differentiate between certain behavioral patterns from various movements as it goes along," explains Schirrmeister. The model is based on the connections between nerve cells in the human body in which electric signals from synapses are directed from cellular protuberances to the cell's core and back again. "Theories have been in circulation for decades, but it wasn't until the emergence of today's computer processing power that the model has become feasible," comments Schirrmeister.
Customarily, the model's precision improves with a larger number of processing layers; the use of many layers is known as "deep learning," and up to 31 were used during the study. Up until now, it had been problematic to interpret the network's circuitry after the learning process had been completed: all algorithmic processes take place in the background and are invisible. That is why the researchers developed the software to create visualization maps from which they could understand the decoding decisions. The researchers can insert new datasets into the system at any time. "Unlike the old method, we are now able to go directly to the raw signals that the EEG records from the brain. Our system is as precise as, if not better than, the old one," says head investigator Tonio Ball, summarizing the study's research contribution. The technology's potential has yet to be exhausted -- together with his team, the researcher would like to further pursue its development: "Our vision for the future includes self-learning algorithms that can reliably and quickly recognize the user's various intentions based on their brain signals. In addition, such algorithms could assist neurological diagnoses."
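To make the layer-by-layer idea concrete, here is a minimal convolutional network for classifying EEG trials, written as a sketch in PyTorch; the channel count, trial length, layer sizes, and class labels are illustrative assumptions and do not reproduce the Freiburg group's actual architecture.

```python
# Minimal illustrative ConvNet for EEG trial classification (PyTorch).
# Channel count, trial length, layer sizes, and classes are assumptions,
# not the architecture from the Human Brain Mapping study.
import torch
import torch.nn as nn

N_CHANNELS = 64   # EEG electrodes (assumed)
N_SAMPLES = 500   # time points per trial (assumed)
N_CLASSES = 4     # e.g. hand movement, foot movement, rotation, rest (assumed)

model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 32, kernel_size=25, padding=12),  # temporal filtering
    nn.ReLU(),                                               # non-linear step
    nn.MaxPool1d(4),
    nn.Conv1d(32, 64, kernel_size=11, padding=5),            # higher-level features
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, N_CLASSES),                                # one score per class
)

x = torch.randn(8, N_CHANNELS, N_SAMPLES)  # a batch of 8 raw EEG trials
print(model(x).shape)                      # -> torch.Size([8, 4])
```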

Pioneering nanotechnology captures energy from people

This foldable keyboard, created by Michigan State University engineer Nelson Sepulveda and his research team, operates by touch; no battery is needed. Sepulveda developed a new way to harvest energy from human motion using a pioneering device called a biocompatible ferroelectret nanogenerator, or FENG.
Michigan State University engineering researchers have created a new way to harvest energy from human motion, using a film-like device that actually can be folded to create more power. With the low-cost device, known as a nanogenerator, the scientists successfully operated an LCD touch screen, a bank of 20 LED lights and a flexible keyboard, all with a simple touching or pressing motion and without the aid of a battery.
The groundbreaking findings, published in the journal Nano Energy, suggest "we're on the path toward wearable devices powered by human motion," said Nelson Sepulveda, associate professor of electrical and computer engineering and lead investigator of the project.
"What I foresee, relatively soon, is the capability of not having to charge your cell phone for an entire week, for example, because that energy will be produced by your movement," said Sepulveda, whose research is funded by the National Science Foundation.
The innovative process starts with a silicon wafer, which is then fabricated with several layers, or thin sheets, of environmentally friendly substances including silver, polyimide and polypropylene ferroelectret. Ions are added so that each layer in the device contains charged particles. Electrical energy is created when the device is compressed by human motion, or mechanical energy.
The completed device is called a biocompatible ferroelectret nanogenerator, or FENG. The device is as thin as a sheet of paper and can be adapted to many applications and sizes. The device used to power the LED lights was palm-sized, for example, while the device used to power the touch screen was as small as a finger.
Advantages such as being lightweight, flexible, biocompatible, scalable, low-cost and robust could make FENG "a promising and alternative method in the field of mechanical-energy harvesting" for many autonomous electronics such as wireless headsets, cell phones and other touch-screen devices, the study says.
Remarkably, the device also becomes more powerful when folded.
"Each time you fold it you are increasing exponentially the amount of voltage you are creating," Sepulveda said. "You can start with a large device, but when you fold it once, and again, and again, it's now much smaller and has more energy. Now it may be small enough to put in a specially made heel of your shoe so it creates power each time your heel strikes the ground."
Sepulveda and his team are developing technology that would transmit the power generated from the heel strike to, say, a wireless headset.

Detecting emotions with wireless signals

Measuring your heartbeat and breath, the device can tell if you're excited, happy, angry, or sad.


Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed "EQ-Radio," a device that can detect a person's emotions using wireless signals. By measuring subtle changes in breathing and heart rhythms, EQ-Radio is 87 percent accurate at detecting if a person is excited, happy, angry or sad -- and can do so without on-body sensors.
MIT professor and project lead Dina Katabi envisions the system being used in entertainment, consumer behavior, and health care. Film studios and ad agencies could test viewers' reactions in real-time, while smart homes could use information about your mood to adjust the heating or suggest that you get some fresh air.
"Our work shows that wireless signals can capture information about human behavior that is not always visible to the naked eye," says Katabi, who co-wrote a paper on the topic with PhD students Mingmin Zhao and Fadel Adib. "We believe that our results could pave the way for future technologies that could help monitor and diagnose conditions like depression and anxiety."
EQ-Radio builds on Katabi's continued efforts to use wireless technology for measuring human behaviors such as breathing and falling. She says that she will incorporate emotion-detection into her spinoff company Emerald, which makes a device that is aimed at detecting and predicting falls among the elderly.
Using wireless signals reflected off people's bodies, the device measures heartbeats as accurately as an ECG monitor, with a margin of error of approximately 0.3 percent. It then studies the waveforms within each heartbeat to match a person's behavior to how they previously acted in one of the four emotion-states.
The team will present the work next month at the Association of Computing Machinery's International Conference on Mobile Computing and Networking (MobiCom).
How it works
Existing emotion-detection methods rely on audiovisual cues or on-body sensors, but there are downsides to both techniques. Facial expressions are famously unreliable, while on-body sensors such as chest bands and ECG monitors are inconvenient to wear and become inaccurate if they change position over time.
EQ-Radio instead sends wireless signals that reflect off of a person's body and back to the device. Its beat-extraction algorithms break the reflections into individual heartbeats and analyze the small variations in heartbeat intervals to determine their levels of arousal and positive affect.
These measurements are what allow EQ-Radio to detect emotion. For example, a person whose signals correlate to low arousal and negative affect is more likely to be tagged as sad, while someone whose signals correlate to high arousal and positive affect would likely be tagged as excited.
The exact correlations vary from person to person, but are consistent enough that EQ-Radio could detect emotions with 70 percent accuracy even when it hadn't previously measured the target person's heartbeat.
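The mapping described above is essentially a two-axis scheme over arousal and affect; the toy function below sketches such a quadrant mapping, with the sign-based rule and zero-centered scores as simplifying assumptions standing in for EQ-Radio's trained classifier.

```python
# Toy quadrant mapping from (arousal, affect) scores to the four emotions
# named in the article. EQ-Radio uses a trained classifier on heartbeat
# features; this sign-based rule is only an illustration.
def emotion_from_affect(arousal: float, affect: float) -> str:
    """Scores are assumed centered at 0 (negative = low arousal / negative affect)."""
    if affect >= 0:
        return "excited" if arousal >= 0 else "happy"
    return "angry" if arousal >= 0 else "sad"

print(emotion_from_affect(-0.7, -0.4))  # low arousal, negative affect -> sad
print(emotion_from_affect(0.8, 0.6))    # high arousal, positive affect -> excited
```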
"Just by knowing how people breathe and how their hearts beat in different emotional states, we can look at a random person's heartbeat and reliably detect their emotions," says Zhao. For the experiments, subjects used videos or music to recall a series of memories that each evoked one the four emotions, as well as a no-emotion baseline. Trained just on those five sets of two-minute videos, EQ-Radio could then accurately classify the person's behavior among the four emotions 87 percent of the time.
Compared with Microsoft's vision-based "Emotion API," which focuses on facial expressions, EQ-Radio was found to be significantly more accurate in detecting joy, sadness, and anger. The two systems performed similarly with neutral emotions, since a face's absence of emotion is generally easier to detect than its presence.
One of the CSAIL team's toughest challenges was to tune out irrelevant data. In order to get individual heartbeats, for example, the team had to dampen the breathing, since the distance that a person's chest moves from breathing is much greater than the distance that their heart moves to beat.
To do so, the team focused on wireless signals that are based on acceleration rather than distance traveled, since the rise and fall of the chest with each breath tends to be much more consistent -- and, therefore, have a lower acceleration -- than the motion of the heartbeat.
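One way to read the acceleration idea is that differentiating the reflected displacement signal twice suppresses the slow, large chest motion of breathing relative to the smaller but faster heartbeat motion, since acceleration scales with frequency squared. The sketch below illustrates this on synthetic signals; the sampling rate, amplitudes, and frequencies are assumed values, and this is not EQ-Radio's actual beat-extraction algorithm.

```python
# Toy illustration: differentiating a displacement signal twice (acceleration)
# emphasizes fast heartbeat motion over slow, large breathing motion.
# Synthetic signals with assumed amplitudes/frequencies; not EQ-Radio's algorithm.
import numpy as np

fs = 100.0                                        # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)

breathing = 5.0 * np.sin(2 * np.pi * 0.25 * t)    # large, slow chest motion (mm)
heartbeat = 0.2 * np.sin(2 * np.pi * 1.2 * t)     # small, faster motion (mm)

def accel(x):
    """Second derivative of a sampled displacement signal."""
    return np.gradient(np.gradient(x, 1 / fs), 1 / fs)

print("breathing/heartbeat ratio in displacement:",
      round(np.ptp(breathing) / np.ptp(heartbeat), 1))                # ~25x
print("breathing/heartbeat ratio in acceleration:",
      round(np.ptp(accel(breathing)) / np.ptp(accel(heartbeat)), 2))  # ~1.1x
```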
Although the focus on emotion-detection meant analyzing the time between heartbeats, the team says that the algorithm's ability to capture the heartbeat's entire waveform means that in the future it could be used in non-invasive health monitoring and diagnostic settings.
"By recovering measurements of the heart valves actually opening and closing at a millisecond time-scale, this system can literally detect if someone's heart skips a beat," says Adib. "This opens up the possibility of learning more about conditions like arrhythmia, and potentially exploring other medical applications that we haven't even thought of yet."

Thursday, 11 January 2018

Largest known prime number discovered



The new prime number, also known as M77232917, is calculated by multiplying together 77,232,917 twos, and then subtracting one. It is nearly one million digits larger than the previous record prime number, in a special class of extremely rare prime numbers known as Mersenne primes. It is only the 50th known Mersenne prime ever discovered, each increasingly difficult to find. Mersenne primes were named for the French monk Marin Mersenne, who studied these numbers more than 350 years ago. GIMPS, founded in 1996, has discovered the last 16 Mersenne primes. Volunteers download a free program to search for these primes, with a cash award offered to anyone lucky enough to find a new prime. Prof. Chris Caldwell maintains an authoritative web site on the largest known primes, and has an excellent history of Mersenne primes.
The primality proof took six days of non-stop computing on a PC with an Intel i5-6600 CPU. To prove there were no errors in the prime discovery process, the new prime was independently verified using four different programs on four different hardware configurations.
  • Aaron Blosser verified it using Prime95 on an Intel Xeon server in 37 hours.
  • David Stanfill verified it using gpuOwL on an AMD RX Vega 64 GPU in 34 hours.
  • Andreas Höglund verified the prime using CUDALucas running on an NVidia Titan Black GPU in 73 hours.
  • Ernst Mayer also verified it using his own program Mlucas on a 32-core Xeon server in 82 hours. Andreas Höglund also confirmed it using Mlucas running on an Amazon AWS instance in 65 hours.
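For context, GIMPS and the verification runs above rely on the Lucas-Lehmer test, which decides the primality of a Mersenne number 2^p - 1 (for prime p) using p - 2 modular squarings. The sketch below shows the test on small exponents; testing the record exponent 77,232,917 requires heavily optimized FFT-based arithmetic such as Prime95's, far beyond this toy code.

```python
# Lucas-Lehmer primality test for Mersenne numbers M_p = 2**p - 1 (p prime).
# Small exponents only; the record exponent 77,232,917 needs optimized
# FFT-based multiplication (as in Prime95), not this toy sketch.
import math

def is_mersenne_prime(p: int) -> bool:
    if p == 2:
        return True
    m = (1 << p) - 1          # M_p = 2**p - 1
    s = 4
    for _ in range(p - 2):    # p - 2 modular squarings
        s = (s * s - 2) % m
    return s == 0

for p in (3, 5, 7, 11, 13, 17, 19):
    print(p, is_mersenne_prime(p))   # 11 reports False: 2047 = 23 * 89

digits = int(77_232_917 * math.log10(2)) + 1
print("M77232917 has", digits, "decimal digits")  # 23,249,425
```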
Jonathan Pace is a 51-year-old electrical engineer living in Germantown, Tennessee. Perseverance has finally paid off for Jon -- he has been hunting for big primes with GIMPS for over 14 years. The discovery is eligible for a $3,000 GIMPS research discovery award.
GIMPS Prime95 client software was developed by founder George Woltman. Scott Kurowski wrote the PrimeNet system software that coordinates GIMPS' computers. Aaron Blosser is now the system administrator, upgrading and maintaining PrimeNet as needed. Volunteers have a chance to earn research discovery awards of $3,000 or $50,000 if their computer discovers a new Mersenne prime. GIMPS' next major goal is to win the $150,000 award administered by the Electronic Frontier Foundation offered for finding a 100 million digit prime number.
Credit for this prime goes not only to Jonathan Pace for running the Prime95 software, Woltman for writing the software, Kurowski and Blosser for their work on the Primenet server, but also the thousands of GIMPS volunteers that sifted through millions of non-prime candidates. In recognition of all the above people, official credit for this discovery goes to "J. Pace, G. Woltman, S. Kurowski, A. Blosser, et al."
The Great Internet Mersenne Prime Search (GIMPS) was formed in January 1996 by George Woltman to discover new world record size Mersenne primes. In 1997 Scott Kurowski enabled GIMPS to automatically harness the power of thousands of ordinary computers to search for these "needles in a haystack." Most GIMPS members join the search for the thrill of possibly discovering a record-setting, rare, and historic new Mersenne prime. The search for more Mersenne primes is already under way. There may be smaller, as yet undiscovered Mersenne primes, and there almost certainly are larger Mersenne primes waiting to be found. Anyone with a reasonably powerful PC can join GIMPS and become a big prime hunter, and possibly earn a cash research discovery award.