With Regards
Qasim
pk.linkedin.com/pub/qasim-sheikh/0/250/712
+923008540838 (mob)
----- Forwarded Message -----
From: Qasim Sheikh
To: Dr. Asad Naeem; Abdullah Sadiq; Shiraz Shahid; Fakhar Lodhi; Qasim Sheikh
Sent: Monday, June 3, 2013 3:56 PM
Subject: This article is a treasure of ideas for FYP
Microphones as sensors
Teaching old microphones new tricks
Sensor technology: Microphones are designed to capture sound. But they turn out to be able to capture other sorts of information, too
Jun 1st 2013 | From the print edition

MICROPHONES exist in many shapes and sizes, and work in many different ways. In the late 19th century, early telephones relied on carbon microphones, pioneered by Thomas Edison; today's smartphones contain tiny microphones based on micro-electro-mechanical systems, commonly called MEMS. Specialist microphones abound in recording studios; others are used by spies. But whatever the technology, these microphones all do the same thing: they convert sound waves into an electrical signal.

It turns out, however, that with the addition of suitable software, microphones can detect more than mere audio signals. They can act as versatile sensors, capable of tuning into signals from inside the body, assessing the social environment and even tracking people's posture and gestures. Researchers have reimagined microphones as multi-talented collectors of information. And because they are built into smartphones that can be taken anywhere, and can acquire new abilities simply by downloading an app, they are being put to a range of unusual and beneficial uses.

That natural microphone, the human ear, is finely attuned to picking up certain characteristics in a person's voice. It is not hard, for instance, to infer from a slight change in pitch when a friend might be under stress. Tanzeem Choudhury of Cornell University and her research team are building mobile-phone software that can be trained to do the same thing. That stress results in subtle changes in pitch, amplitude and frequency, as well as speaking rate, has been known for decades. But humans respond to stress in different ways and have different coping styles, says Dr Choudhury.
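The vocal cues in question (pitch, amplitude, speaking rate) are cheap to estimate with elementary signal processing. A minimal sketch in Python with numpy, illustrating the general idea rather than StressSense's actual feature pipeline:

```python
import numpy as np

def voice_features(signal, sample_rate):
    """Two crude features of the kind stress classifiers draw on:
    amplitude (root-mean-square energy) and pitch (strongest
    autocorrelation peak in the human voice range, ~75-300 Hz)."""
    rms = float(np.sqrt(np.mean(signal ** 2)))
    # Autocorrelation: a periodic (voiced) signal correlates strongly
    # with itself at a lag of one pitch period.
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = sample_rate // 300, sample_rate // 75
    lag = lo + int(np.argmax(corr[lo:hi]))
    pitch_hz = sample_rate / lag
    return rms, pitch_hz

# Synthetic 150 Hz "voice" for demonstration.
sample_rate = 8000
t = np.arange(2000) / sample_rate          # a quarter-second clip
clip = 0.5 * np.sin(2 * np.pi * 150 * t)
rms, pitch = voice_features(clip, sample_rate)
```

Real systems add noise-robust pitch trackers and per-user calibration on top; the point is only that the raw features cost a few array operations.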
So a one-size-fits-all smartphone app which analyses speech will not provide an accurate assessment of whether someone is stressed or not.

The sound of stress

Dr Choudhury's solution is an app called StressSense. Running on a standard Android-based smartphone, it listens for certain universal indicators of stress but is able, over time, to learn the specifics of a particular user's voice. It is unobtrusive yet is also robust enough to work in noisy environments, which is crucial if it is to be of practical diagnostic use. StressSense does not actually record speech, Dr Choudhury emphasises, but simply captures and analyses characteristics such as amplitude and frequency. In a paper published last year, the researchers concluded that "it is feasible to implement a computationally demanding stress-classification system on off-the-shelf smartphones". Their ultimate goal is to develop an app that can help someone determine the links between irritating situations and subsequent responses. Your phone might realise before you do, for example, that your 8am meetings are the cause of your headaches.

StressSense is still in development. In the meantime, Dr Choudhury's team has launched an Android app called BeWell that focuses more on overall health by looking at three metrics: sleep, physical activity and social interaction. These three metrics, Dr Choudhury believes, are important yet easily measured indicators of someone's health. BeWell's sleep-tracking feature guesses whether the phone's user is awake or not by analysing usage, light and sound levels, and charging habits. Physical activity is monitored using built-in accelerometers for motion detection. And social activity is measured chiefly by collecting snippets of sound that indicate that the user is talking to someone, either in person or over the phone. Again, no actual words are stored, simply features of human speech, which the app can distinguish from background noise such as music or traffic.
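Telling speech apart from steady background noise is classically done with cheap per-frame statistics. A sketch of one such heuristic (this is the generic energy-variance trick, not BeWell's actual classifier):

```python
import numpy as np

def looks_like_speech(signal, sample_rate, frame_ms=30):
    """Crude voice-activity heuristic: speech alternates between loud
    voiced frames and near-silent pauses, so frame energy varies a lot;
    stationary noise (a hum, steady traffic) is comparatively flat."""
    n = int(sample_rate * frame_ms / 1000)
    frames = [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]
    energies = np.array([np.mean(f ** 2) for f in frames])
    # Coefficient of variation of frame energy: high for speech-like
    # on/off patterns, low for stationary sounds.
    return float(energies.std() / (energies.mean() + 1e-12)) > 0.5

# Demonstration signals: a steady hum versus talk-like bursts.
sample_rate = 8000
t = np.arange(sample_rate) / sample_rate            # one second
hum = 0.5 * np.sin(2 * np.pi * 150 * t)             # steady background
bursts = hum * (np.floor(t * 5) % 2 == 0)           # on/off every 200 ms
```

Here the bursty signal passes the check while the steady hum is rejected; production classifiers layer zero-crossing rate and spectral features on top of this.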
Certain changes in a person's social interactions—a sudden drop-off, for instance—can be indicative of health problems such as depression, says Dr Choudhury. The idea is that the app could tip someone off to a change in their behaviour that might otherwise have gone unnoticed.

Microphones need not limit themselves to listening to the human voice, however. John Stankovic of the University of Virginia in Charlottesville is using microphones to capture heartbeats. Researchers in his group are using earphones modified with accelerometers and additional microphones that detect the pulse in arteries in the wearer's ear. This makes it possible to collect information about the wearer's physical state, including heart rate and activity level, which is transmitted to the smartphone via the audio jack. The researchers even created an app, called MusicalHeart, that analyses the wearer's heart rate and recommends songs from a music library based on a heart-rate goal—faster to encourage a runner, or slower to calm someone who is feeling nervous.

Dr Stankovic says he is working with anaesthesiologists to develop a similar system that could use the calming effect of music to help people who are about to undergo surgery. The aim, he says, is to explore the potential for using a pulse-music feedback system to calm the heart, rather than simply resorting to drugs to do the job. In a separate project, Dr Stankovic is developing sound libraries that can be used to identify and classify certain types of sounds, from direct indicators of depression in the human voice to coughs, wheezes and lung function. Just as a phone can perform speech recognition, it might also be able to distinguish between healthy and unhealthy lung sounds.
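A caricature of such acoustic classification, comparing a sound's spectral energy distribution against labelled examples (the band splits, prototypes and nearest-centroid rule below are illustrative assumptions, not Dr Stankovic's published method):

```python
import numpy as np

BANDS = ((0, 500), (500, 1500), (1500, 4000))   # Hz; illustrative splits

def band_energy_profile(signal, sample_rate):
    """Fraction of spectral energy in a few frequency bands -- a toy
    feature vector (wheezes concentrate energy differently from
    healthy breath sounds)."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sample_rate)
    energies = np.array([power[(freqs >= lo) & (freqs < hi)].sum()
                         for lo, hi in BANDS])
    return energies / energies.sum()

def classify(signal, sample_rate, prototypes):
    """Nearest-centroid classification against labelled prototype
    profiles, e.g. {"healthy": [...], "wheeze": [...]}."""
    profile = band_energy_profile(signal, sample_rate)
    return min(prototypes,
               key=lambda label: np.linalg.norm(profile - prototypes[label]))
```

With prototype profiles learned from labelled recordings, a low-frequency breath sound lands nearest the "healthy" centroid and a high-frequency wheeze nearest "wheeze"; real diagnostic services use far richer features and classifiers than this.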
Dr Stankovic and his team published details of their proposed "physiological acoustic anomaly detection service" in a paper in April.

Working in a similar vein, Shwetak Patel, a researcher at the University of Washington in Seattle, has found a way to use a smartphone to measure lung function from a hearty blow on its microphone. He and his team have developed an iPhone app, called SpiroSmart, that simulates a digital spirometer, the device that measures the volume of air a person can expel from her lungs. Spirometers help doctors better understand the health status of patients with conditions like asthma, chronic obstructive pulmonary disease (COPD) and cystic fibrosis. But clinical spirometers are expensive, costing thousands of dollars, which makes them impractical for widespread home use. Dr Patel and his collaborators reckon that if people who need one could have a spirometer on their phones, they could better manage their conditions, and their doctors could spot problems earlier, in between check-ups.

But in deciding to use the microphone on an iPhone, Dr Patel had to rethink the design of a spirometer, which uses small turbines to measure airflow. Instead of building a turbine add-on to the phone, his team developed software that listens for acoustic features, such as resonance, that result from air being expelled through the trachea and past the vocal cords. These features, he says, directly indicate the volume of air moved from the lungs. In a paper published last year, a study of 52 patients compared the mobile-based spirometer with a clinical one. SpiroSmart's results were accurate to within 5% of those from the gold-standard clinical device.
The app is undergoing clinical trials with patients diagnosed with COPD, cystic fibrosis and chronic asthma, and Dr Patel hopes to get Food and Drug Administration approval for SpiroSmart as a medical device by the end of the year.

Another of Dr Patel's microphone-based projects uses sound to recognise hand-movements in the air, just as touchscreens can distinguish between different gestures. The software, called SoundWave, produces an inaudible, high-pitched tone from a loudspeaker that bounces off a nearby object, such as a hand. The microphone picks up the reflected sound, and audio-processing software can then detect different movements, such as waving your hand, wiggling your fingers, or making two-handed gestures. It does this by measuring the slight shift in frequency that results when sound bounces off a moving object, known as the Doppler effect (a familiar everyday example of which is the change in the pitch of an ambulance's siren as it passes by). The advantage of this approach is that it allows existing devices (smartphones and laptops, say) to detect gestures, such as flicking your hand upwards to scroll through a document, without the need for any additional hardware.

Keeping an ear on drivers

Expanding on this idea, Dr Patel is currently developing software that uses a smartphone's existing microphone and speakers and enables the device to detect its position in a car. Just like SoundWave, it would produce inaudible tones that reflect off the car's interior. It might then be possible to begin and end calls using gestures; the technique could also, Dr Patel suggests, form the basis of an opt-in service that locks the phone from the driver, keeping him from texting or making calls while driving.

One drawback of using microphones for these various kinds of sensing is that keeping a microphone listening at all times, and running software to analyse what it hears, can consume a lot of battery power. But a new trick devised by Qualcomm, a maker of chips for mobile devices, may be able to help.
Its Snapdragon 800 processors have a feature called Snapdragon Voice Activation that can wake up a gadget from standby mode at the sound of a voice command. When no voices can be heard, the device remains in a battery-sipping slumber. As smartphones continue to be put to unexpected uses, the humble microphone is ready to play its part.
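For a sense of scale in SoundWave-style Doppler sensing: sound reflected off an object moving at speed v is shifted by roughly 2v/c times the emitted frequency, because it is Doppler-shifted once on the way out and once on the way back. The tone frequency and hand speed below are illustrative assumptions, not figures from the SoundWave paper.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doppler_shift(tone_hz, object_speed):
    """Approximate frequency shift (Hz) of a tone reflected off an
    object moving at object_speed (m/s) toward the speaker/microphone."""
    return 2.0 * object_speed * tone_hz / SPEED_OF_SOUND

shift = doppler_shift(18_000, 0.5)   # near-inaudible tone, hand at 0.5 m/s
```

That comes to about 52 Hz, small but easily resolved by an FFT of the microphone signal, which is why no extra hardware is needed.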