Saturday 10 February 2018

Android apps can conspire to mine information from your smartphone



Mobile phones have increasingly become the repository for the details that drive our everyday lives. But researchers have recently discovered that the same apps we regularly use on our phones to organize lunch dates, make convenient online purchases, and communicate the most intimate details of our existence have secretly been colluding to mine our information.
"Researchers were aware that apps may talk to one another in some way, shape, or form," said Wang. "What this study shows undeniably with real-world evidence over and over again is that app behavior, whether it is intentional or not, can pose a security breach depending on the kinds of apps you have on your phone."
The threats fall into two major categories: malware apps specifically designed to launch a cyberattack, and apps that simply allow for collusion and privilege escalation. In the latter category, it is not possible to determine the developer's intention, so collusion, while still a security breach, can in many cases be unintentional.
To test pairs of apps at scale, the team developed a tool called DIALDroid to perform their massive inter-app security analysis. The study, funded by the Defense Advanced Research Projects Agency as part of its Automated Program Analysis for Cybersecurity initiative, took 6,340 hours using the newly developed DIALDroid software, a task that would have taken considerably longer without it.
The paper's first author, Amiangshu Bosu, an assistant professor at Southern Illinois University, spearheaded the software development effort and the push to release the code to the wider research community. Fang Liu, a fifth-year Ph.D. candidate studying under Yao, also contributed to the malware detection research.
"Our team was able to exploit the strengths of relational databases to complete the analysis, in combination with efficient static program analysis, workflow engineering and optimization, and the utilization of high performance computing. Of the apps we studied, we found thousands of pairs of apps that could potentially leak sensitive phone or personal information and allow unauthorized apps to gain access to privileged data," said Yao, who is both an Elizabeth and James E. Turner Jr. '56 and L-3 Faculty Fellow.
The team studied a whopping 110,150 apps over three years, including 100,206 of Google Play's most popular apps and 9,994 malware apps from Virus Share, a private collection of malware app samples. The setup for a cybersecurity leak works when a seemingly innocuous sender app, like that handy and ubiquitous flashlight app, works in tandem with a receiver app to divulge a user's information, such as contacts or geolocation, or to provide access to the web.
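To make the sender/receiver pairing concrete, here is a minimal sketch of the kind of relational-database join an inter-app analysis such as DIALDroid can perform once static analysis has extracted each app's intent-based exit points (where data leaves) and entry points (where data arrives). The schema, table names, and example apps are illustrative assumptions, not DIALDroid's actual design.

```python
# Illustrative only: a toy schema, not DIALDroid's real one.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE exit_points(sender TEXT, intent_action TEXT, leaked_data TEXT);
CREATE TABLE entry_points(receiver TEXT, intent_action TEXT, privilege TEXT);
INSERT INTO exit_points VALUES
  ('flashlight.app', 'com.example.SHARE', 'geolocation');
INSERT INTO entry_points VALUES
  ('ringtone.app', 'com.example.SHARE', 'internet');
""")

# Flag every pair where a sender's implicit intent action matches a
# receiver registered for that same action: a potential collusion channel.
risky_pairs = db.execute("""
    SELECT e.sender, r.receiver, e.leaked_data, r.privilege
    FROM exit_points AS e
    JOIN entry_points AS r ON e.intent_action = r.intent_action
""").fetchall()
print(risky_pairs)  # [('flashlight.app', 'ringtone.app', 'geolocation', 'internet')]
```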
The team found that the biggest security risks came from some of the least utilitarian apps: those for personalizing ringtones, widgets, and emojis.
"App security is a little like the Wild West right now with few regulations," said Wang. "We hope this paper will be a source for the industry to consider re-examining their software development practices and incorporate safeguards on the front end. While we can¹t quantify what the intention is for app developers in the non-malware cases we can at least raise awareness of this security problem with mobile apps for consumers who previosuly may not have thought much about what they were downloading onto their phones."

Artificial neural networks decode brain activity during performed and imagined movements


Filtering information for search engines, acting as an opponent during a board game, or recognizing images: artificial intelligence has far outpaced human intelligence in certain tasks. Several groups from the Freiburg excellence cluster BrainLinks-BrainTools, led by neuroscientist Dr. Tonio Ball, are showing how ideas from computer science could revolutionize brain research. In the scientific journal Human Brain Mapping, they illustrate how a self-learning algorithm decodes human brain signals that were measured by an electroencephalogram (EEG).
The decoded signals included performed movements, but also hand and foot movements that were merely imagined, as well as the imagined rotation of objects. Even though the algorithm was not given any signal characteristics ahead of time, it works as quickly and precisely as traditional systems that were created to solve specific tasks based on predetermined brain signal characteristics, and which are therefore not appropriate for every situation.
The demand for such diverse intersections between human and machine is huge: At the University Hospital Freiburg, for instance, it could be used for early detection of epileptic seizures. It could also be used to improve communication possibilities for severely paralyzed patients or an automatic neurological diagnosis.
"Our software is based on brain-inspired models that have proven to be most helpful to decode various natural signals such as phonetic sounds," says computer scientist Robin Tibor Schirrmeister. The researcher is using it to rewrite methods that the team has used for decoding EEG data: So-called artificial neural networks are the heart of the current project at BrainLinks-BrainTools. "The great thing about the program is we needn't predetermine any characteristics. The information is processed layer for layer, that is in multiple steps with the help of a non-linear function. The system learns to recognize and differentiate between certain behavioral patterns from various movements as it goes along," explains Schirrmeister. The model is based on the connections between nerve cells in the human body in which electric signals from synapses are directed from cellular protuberances to the cell's core and back again. "Theories have been in circulation for decades, but it wasn't until the emergence of today's computer processing power that the model has become feasible," comments Schirrmeister.
Customarily, the model's precision improves with a larger number of processing layers; up to 31 were used during the study, an approach known as "deep learning." Until now, it had been problematic to interpret the network's circuitry after the learning process was completed: all algorithmic processes take place in the background and are invisible. That is why the researchers developed software to create visualization maps from which they could understand the decoding decisions. The researchers can insert new datasets into the system at any time. "Unlike the old method, we are now able to go directly to the raw signals that the EEG records from the brain. Our system is as precise as, if not better than, the old one," says head investigator Tonio Ball, summarizing the study's research contribution. The technology's potential has yet to be exhausted; together with his team, the researcher would like to pursue its development further: "Our vision for the future includes self-learning algorithms that can reliably and quickly recognize the user's various intentions based on their brain signals. In addition, such algorithms could assist neurological diagnoses."
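The visualization maps mentioned above can be approximated in spirit with input-gradient attribution: the gradient of a decoded class score with respect to the raw EEG input shows which channels and time points drove the decision. The toy linear model and random trial below are stand-ins for illustration, not the study's method.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 500, 4))  # toy decoder (assumption)
trial = torch.randn(1, 64, 500, requires_grad=True)          # one synthetic EEG trial

score = model(trial)[0, 2]      # score assigned to one decoded class
score.backward()                # backpropagate the score to the input
saliency = trial.grad.abs()[0]  # (channels, time) relevance map
print(saliency.shape)           # torch.Size([64, 500])
```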

Pioneering nanotechnology captures energy from people

This foldable keyboard, created by Michigan State University engineer Nelson Sepulveda and his research team, operates by touch; no battery is needed. Sepulveda developed a new way to harvest energy from human motion using a pioneering device called a biocompatible ferroelectret nanogenerator, or FENG.
Michigan State University engineering researchers have created a new way to harvest energy from human motion, using a film-like device that actually can be folded to create more power. With the low-cost device, known as a nanogenerator, the scientists successfully operated an LCD touch screen, a bank of 20 LED lights and a flexible keyboard, all with a simple touching or pressing motion and without the aid of a battery.
The groundbreaking findings, published in the journal Nano Energy, suggest "we're on the path toward wearable devices powered by human motion," said Nelson Sepulveda, associate professor of electrical and computer engineering and lead investigator of the project.
"What I foresee, relatively soon, is the capability of not having to charge your cell phone for an entire week, for example, because that energy will be produced by your movement," said Sepulveda, whose research is funded by the National Science Foundation.
The innovative process starts with a silicon wafer, which is then fabricated with several layers, or thin sheets, of environmentally friendly substances including silver, polyimide and polypropylene ferroelectret. Ions are added so that each layer in the device contains charged particles. Electrical energy is created when the device is compressed by human motion, or mechanical energy.
The completed device is called a biocompatible ferroelectret nanogenerator, or FENG. The device is as thin as a sheet of paper and can be adapted to many applications and sizes. The device used to power the LED lights was palm-sized, for example, while the device used to power the touch screen was as small as a finger.
Advantages such as being lightweight, flexible, biocompatible, scalable, low-cost and robust could make FENG "a promising and alternative method in the field of mechanical-energy harvesting" for many autonomous electronics such as wireless headsets, cell phones and other touch-screen devices, the study says.
Remarkably, the device also becomes more powerful when folded.
"Each time you fold it you are increasing exponentially the amount of voltage you are creating," Sepulveda said. "You can start with a large device, but when you fold it once, and again, and again, it's now much smaller and has more energy. Now it may be small enough to put in a specially made heel of your shoe so it creates power each time your heel strikes the ground."
Sepulveda and his team are developing technology that would transmit the power generated from the heel strike to, say, a wireless headset.

Detecting emotions with wireless signals

Measuring your heartbeat and breath, the device can tell if you're excited, happy, angry, or sad.


Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed "EQ-Radio," a device that can detect a person's emotions using wireless signals. By measuring subtle changes in breathing and heart rhythms, EQ-Radio is 87 percent accurate at detecting if a person is excited, happy, angry or sad -- and can do so without on-body sensors.
MIT professor and project lead Dina Katabi envisions the system being used in entertainment, consumer behavior, and health care. Film studios and ad agencies could test viewers' reactions in real-time, while smart homes could use information about your mood to adjust the heating or suggest that you get some fresh air.
"Our work shows that wireless signals can capture information about human behavior that is not always visible to the naked eye," says Katabi, who co-wrote a paper on the topic with PhD students Mingmin Zhao and Fadel Adib. "We believe that our results could pave the way for future technologies that could help monitor and diagnose conditions like depression and anxiety."
EQ-Radio builds on Katabi's continued efforts to use wireless technology for measuring human behaviors such as breathing and falling. She says that she will incorporate emotion-detection into her spinoff company Emerald, which makes a device that is aimed at detecting and predicting falls among the elderly.
Using wireless signals reflected off people's bodies, the device measures heartbeats as accurately as an ECG monitor, with a margin of error of approximately 0.3 percent. It then studies the waveforms within each heartbeat to match a person's behavior to how they previously acted in one of the four emotion-states.
The team will present the work next month at the Association of Computing Machinery's International Conference on Mobile Computing and Networking (MobiCom).
How it works
Existing emotion-detection methods rely on audiovisual cues or on-body sensors, but there are downsides to both techniques. Facial expressions are famously unreliable, while on-body sensors such as chest bands and ECG monitors are inconvenient to wear and become inaccurate if they change position over time.
EQ-Radio instead sends wireless signals that reflect off of a person's body and back to the device. Its beat-extraction algorithms break the reflections into individual heartbeats and analyze the small variations in heartbeat intervals to determine a person's levels of arousal and positive affect.
These measurements are what allow EQ-Radio to detect emotion. For example, a person whose signals correlate to low arousal and negative affect is more likely to be tagged as sad, while someone whose signals correlate to high arousal and positive affect would likely be tagged as excited.
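A minimal sketch of that quadrant logic, assuming arousal and positive-affect scores have already been extracted and normalized around a per-person baseline; the real EQ-Radio pipeline's feature extraction and calibration are far more involved.

```python
def classify_emotion(arousal: float, valence: float) -> str:
    """Map normalized arousal and positive-affect (valence) scores to one of
    EQ-Radio's four emotion states. Zero thresholds are an assumption."""
    if arousal >= 0:
        return "excited" if valence >= 0 else "angry"
    return "happy" if valence >= 0 else "sad"

print(classify_emotion(arousal=0.8, valence=0.7))    # excited
print(classify_emotion(arousal=-0.6, valence=-0.5))  # sad
```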
The exact correlations vary from person to person, but are consistent enough that EQ-Radio could detect emotions with 70 percent accuracy even when it hadn't previously measured the target person's heartbeat.
"Just by knowing how people breathe and how their hearts beat in different emotional states, we can look at a random person's heartbeat and reliably detect their emotions," says Zhao. For the experiments, subjects used videos or music to recall a series of memories that each evoked one the four emotions, as well as a no-emotion baseline. Trained just on those five sets of two-minute videos, EQ-Radio could then accurately classify the person's behavior among the four emotions 87 percent of the time.
Compared with Microsoft's vision-based "Emotion API," which focuses on facial expressions, EQ-Radio was found to be significantly more accurate in detecting joy, sadness, and anger. The two systems performed similarly with neutral emotions, since a face's absence of emotion is generally easier to detect than its presence.
One of the CSAIL team's toughest challenges was to tune out irrelevant data. In order to get individual heartbeats, for example, the team had to dampen the breathing, since the distance that a person's chest moves from breathing is much greater than the distance that their heart moves to beat.
To do so, the team focused on wireless signals that are based on acceleration rather than distance traveled, since the rise and fall of the chest with each breath tends to be much more consistent -- and, therefore, to have a lower acceleration -- than the motion of the heartbeat.
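A small numerical sketch of that idea, using synthetic sinusoids in place of real RF reflections: differentiating the chest-displacement signal twice suppresses the large but slow breathing motion relative to the small but faster heartbeat motion. The rates and amplitudes below are assumptions.

```python
import numpy as np

fs = 200                                        # assumed sample rate, Hz
t = np.arange(0, 10, 1 / fs)
breathing = 4.0 * np.sin(2 * np.pi * 0.25 * t)  # ~15 breaths/min, large motion
heartbeat = 0.2 * np.sin(2 * np.pi * 1.2 * t)   # ~72 beats/min, small motion
displacement = breathing + heartbeat            # what the radio "sees"

# Each differentiation scales a sinusoid by its frequency, so after two
# derivatives the heartbeat gains (1.2 / 0.25)^2, about 23x, on breathing.
acceleration = np.gradient(np.gradient(displacement, t), t)
```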
Although the focus on emotion-detection meant analyzing the time between heartbeats, the team says that the algorithm's ability to capture the heartbeat's entire waveform means that in the future it could be used in non-invasive health monitoring and diagnostic settings.
"By recovering measurements of the heart valves actually opening and closing at a millisecond time-scale, this system can literally detect if someone's heart skips a beat," says Adib. "This opens up the possibility of learning more about conditions like arrhythmia, and potentially exploring other medical applications that we haven't even thought of yet."