Cancer is debilitating enough on its own. A host of common side effects and potential complications only makes things worse.
When Dr. Bernice Kwong realized that many patients at her supportive oncology clinic regularly visited online forums seeking information on and advice for treatment-caused conditions like hair loss and skin rashes, she wondered if there were any way physicians could use the wealth of data on those networks to more quickly discover potential adverse drug reactions.
The study she designed with six co-authors employed a tool that teased out cognitive relationships from patients’ online testimonials using natural language processing (NLP), a subcategory of artificial intelligence in which computers are pushed to analyze and “understand” large amounts of spoken or written language. Called DeepHealthMiner, the tool analyzed millions of posts from the Inspire health forum and yielded promising results.
Dr. Kavita Sarin, an assistant professor of dermatology at Stanford University Medical Center and one of the study's co-authors, told Built In that besides detecting common drug reactions associated with various cancer medications, "we could actually detect them on online health forums earlier than they were published in medical literature."
Because the NLP-analyzed forums reflected understood drug-reaction associations, the study helped confirm the power of technology to monitor drug safety. It also uncovered a “rare, missed adverse drug reaction” that was hiding in plain sight on cancer-support message boards for more than a decade: loss of sweating.
While the condition can be serious, Sarin said, it's not often obvious.
“A single institution or single physician will not see enough patients to actually be able to detect that that’s a significant adverse reaction.”
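The underlying idea can be sketched as a toy co-occurrence counter. Everything below is illustrative: the posts, the keyword lists and the scoring are simplified stand-ins for DeepHealthMiner's actual deep-learning pipeline, which learns these associations rather than matching fixed terms.

```python
from collections import Counter
from itertools import product

# Toy forum posts; a real pipeline would ingest millions of them.
posts = [
    "started erlotinib last month and the rash on my face is awful",
    "anyone else on erlotinib notice they stopped sweating entirely?",
    "my rash cleared up after switching off erlotinib",
    "vemurafenib gave me terrible joint pain",
]

# Hypothetical keyword lists; the real tool learns these mappings.
drugs = ["erlotinib", "vemurafenib"]
reactions = {"rash": "rash", "sweating": "loss of sweating", "joint pain": "joint pain"}

# Count how often a drug and a reaction term appear in the same post.
counts = Counter()
for post in posts:
    for drug, term in product(drugs, reactions):
        if drug in post and term in post:
            counts[(drug, reactions[term])] += 1

# Frequently co-mentioned pairs are candidate adverse-reaction signals.
for (drug, reaction), n in counts.most_common():
    print(f"{drug} -> {reaction}: {n} mentions")
```

Even this crude version surfaces the sweating signal; the research-grade question is separating genuine signals from noise at scale.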
While the study merely helped establish the efficacy of NLP in gathering and analyzing health data, its impact could prove far greater if the U.S. healthcare industry moves more seriously toward the wider sharing of patient information.
“If the United States had a broader electronic medical records sharing, it would open the avenue for natural language processing and deep learning on those medical records,” Sarin said.
Everyday examples of natural language processing include:
- Predictive text
- Development of medicines
- Email spam filtering
- Text translation
- Automated phone answering
- Summarizing academic papers
- Self-learning search engine
- Smart assistants
For now, though, drug side effects often go undetected until after the medication has gone to market. In fact, nearly a third of drugs approved by the FDA between 2001 and 2010 and made available to the public were discovered to have major safety issues, according to researchers at the Yale School of Medicine. Applied to large datasets of medical testimony, NLP could help solve that problem — and unlock potentially major quality-of-life discoveries in the process.
From translation and order processing to employee recruitment and text summarization, here are seven more natural language processing examples and applications across an array of industries.
1. Streamlining patient information
How it’s using natural language processing: Dubbed “the Amazon Prime of primary care,” this rising healthtech company — led by the founder of Cozi — essentially puts a more tech-savvy spin on the increasingly popular telehealth approach. Here’s how it works: Patients sign up for an annual subscription and are connected to primary care doctors via the app, through which they can text one-on-one about symptoms and treatment options. Before chatting directly with a physician, users give details about their health history and condition to 98point6’s automated assistant, an NLP-powered tool that streamlines pertinent info, not unlike what a nurse does at a traditional doctor’s visit.
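That intake step amounts to what NLP practitioners call slot filling: keep listening until the required fields are captured. The sketch below is a hypothetical, pattern-matching stand-in for 98point6's assistant, which uses far richer language understanding; the fields and patterns are illustrative assumptions.

```python
import re

# Hypothetical intake assistant: gather required fields before the handoff
# to a physician. Fields and patterns are illustrative stand-ins.
REQUIRED = ["symptom", "duration", "severity"]
PATTERNS = {
    "symptom": re.compile(r"headache|rash|cough|fever"),
    "duration": re.compile(r"\d+\s*(?:days|day|weeks|week)"),
    "severity": re.compile(r"mild|moderate|severe"),
}

def intake(messages):
    """Fill slots from patient messages; report what is still missing."""
    slots = {}
    for msg in messages:
        for slot, pattern in PATTERNS.items():
            match = pattern.search(msg.lower())
            if match and slot not in slots:
                slots[slot] = match.group(0)
    missing = [s for s in REQUIRED if s not in slots]
    return slots, missing

slots, missing = intake(["I've had a severe headache", "for about 3 days now"])
print(slots)    # all three fields captured
print(missing)  # nothing left to ask about
```

When `missing` is non-empty, the assistant's next question targets the first unfilled slot, which is what keeps the conversation feeling directed rather than scripted.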
2. Text & media translation
How it’s using natural language processing: Translation company Welocalize, a Google partner, customizes the tech giant's AutoML Translate to make sure client content isn't lost in translation. This type of natural language processing is facilitating far wider content translation of not just text, but also video, audio, graphics and other digital assets.
3. Order processing
How it’s using natural language processing: If you've ever stood waiting at a restaurant counter while a harried cashier took a phone order from someone who wasn't even in line, you know that peculiar hybrid feeling of empathy and impatience — empathy because she probably had to take that order, impatience because you've been patiently waiting in the flesh. Kea aims to alleviate your empatience by helping quick-service restaurants retain revenue that's typically lost when the phone rings and rings while on-site patrons are tended to. The company’s Voice AI uses natural language processing to answer calls and take orders while also providing opportunities for restaurants to bundle menu items into meal packages and compile data that will enhance order-specific recommendations.
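At its simplest, turning an utterance into an order is a parsing problem: map spoken quantities and item names onto a structured ticket. The menu, number words and crude singularization below are illustrative assumptions, not Kea's actual system.

```python
import re

# Hypothetical menu and number words; Kea's production Voice AI pairs speech
# recognition with far richer language models than this keyword sketch.
MENU = {"burger": 5.99, "taco": 2.49, "soda": 1.79}
NUMBERS = {"a": 1, "an": 1, "one": 1, "two": 2, "three": 3}

def parse_order(utterance):
    """Turn a spoken-style order into (item, quantity) pairs."""
    words = re.findall(r"[a-z]+", utterance.lower())
    order = []
    for prev, word in zip(words, words[1:]):
        item = word.rstrip("s")  # crude singularization: burgers -> burger
        if item in MENU and prev in NUMBERS:
            order.append((item, NUMBERS[prev]))
    return order

order = parse_order("I'd like two burgers and a soda please")
total = sum(MENU[item] * qty for item, qty in order)
print(order, round(total, 2))
```

The upsell opportunity the company describes slots in naturally here: once the order is structured data, checking it against bundle deals is a lookup, not a language problem.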
4. The chatbot recruiter
Location: Mountain View, Calif.
How it’s using natural language processing: Like 98point6, employee-recruitment software developer AllyO uses NLP-fueled chatbot technology in a more advanced way than, say, a standard-issue customer assistance bot. In this case, the bot is an “autonomous recruiter” that initiates the preliminary job-interview process, gathering candidate info like location, contact info, skill set, experience level and employment eligibility — all while trying to field any concerns from the candidate. “If I’m an applicant, I have questions for the company as well … so there is some turn-taking involved. And you have to be able to follow the flow of a conversation even when it doesn’t stick from one task to the next,” Kathleen Preddy, a linguistics engineer with AllyO, said in a video discussion.
5. Summarizing text in academic papers
How it’s using natural language processing: Anyone who needs to read academic papers for professional or academic purposes has a couple of problems: Not only are those things difficult to penetrate, they're also numerous. Paper Digest’s NLP tool plays the part of speed reader and judicious editor by laymanizing a paper’s jargon and presenting a concise, bulleted summary. According to co-founder Cristian Mejia, it takes the average American faculty member about 30 minutes to consume an academic text, which translates to a little over 20 papers consumed each month. That's a pittance compared to the total number of scholarly articles published each year, which some estimates put as high as 2.5 million.
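A classic baseline for this task, far simpler than whatever Paper Digest runs in production, is frequency-based extractive summarization: score each sentence by how many high-frequency content words it contains and keep the top scorers as bullets. The toy "paper" and stopword list below are made up for illustration.

```python
import re
from collections import Counter

# Frequency-based extractive summarization: score sentences by how many
# high-frequency content words they contain, keep the top scorers.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "we", "our", "also"}

def summarize(text, n_bullets=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z]+", sentence.lower())
                   if w not in STOPWORDS)
    top = sorted(sentences, key=score, reverse=True)[:n_bullets]
    return [s for s in sentences if s in top]  # preserve original order

paper = ("Transformers dominate language modeling. "
         "Attention layers let transformers weigh context. "
         "We also describe our lab's coffee machine. "
         "Scaling transformers improves language modeling accuracy.")
bullets = summarize(paper)
for bullet in bullets:
    print("-", bullet)
```

The off-topic coffee-machine sentence scores low and drops out, which is the whole trick: sentences built from the document's dominant vocabulary are usually the ones worth keeping.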
6. Conversational speech synthesis technology
Location: Berkeley, Calif.
How it’s using natural language processing: In How to Wreck a Nice Beach, a chronicle of the vocoder’s unlikely journey from voice encryption tool to musical instrument, President Lyndon B. Johnson is said to have hurled his vocoder-connected headset across Air Force One and shouted, “When I talk to the Secretary of State, he better sound like the Secretary of State!” Needless to say, speech synthesis technology has witnessed a few upgrades since those early development stages. The most prominent example is the smart speaker — nearly a quarter of American households now have an Amazon Echo, Google Home or an equivalent device. AI-powered personal assistants — think Amazon Alexa, Siri or Microsoft’s Cortana — are central to their operation. The assistants, in turn, rely on natural language processing — particularly the kind that, as Semantic Machines illustrates on its site, falls on the right side of the command-versus-conversation distinction. Setting an alarm by voice is a command; a conversation is far more sophisticated. In accomplishing the latter, Semantic Machines’ engine interprets a user’s intent, reads the context of an interaction, and updates its own framework to communicate more effectively.
7. Self-learning search
Location: Mountain View, Calif.
How it’s using natural language processing: Poor search function is a surefire way to boost your bounce rate, which is why self-learning search is a must for major e-commerce players. Several prominent clothing retailers, including Neiman Marcus, Forever 21 and Carhartt, incorporate BloomReach’s flagship product, BloomReach Experience (brX). The suite includes a self-learning search and optimizable browsing functions and landing pages, all of which are driven by natural language processing.
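The "self-learning" part can be sketched as click feedback folded back into the ranking function: keyword overlap gets a product onto the results page, and accumulated clicks reorder it over time. The catalog, the 0.1 boost weight and the scoring are illustrative assumptions, not brX's actual ranking.

```python
from collections import defaultdict

# Toy self-learning search: rank by keyword overlap, then boost products that
# shoppers actually click for a query term. Catalog and weights are made up.
PRODUCTS = {
    "p1": "mens wool winter jacket",
    "p2": "womens rain jacket",
    "p3": "wool scarf",
}
clicks = defaultdict(int)  # (query term, product id) -> click count

def search(query):
    terms = query.lower().split()
    def score(pid):
        overlap = sum(t in PRODUCTS[pid] for t in terms)
        boost = sum(clicks[(t, pid)] for t in terms)
        return overlap + 0.1 * boost
    return sorted(PRODUCTS, key=score, reverse=True)

def record_click(query, pid):
    for t in query.lower().split():
        clicks[(t, pid)] += 1

print(search("jacket"))       # keyword overlap alone: the two jackets tie
record_click("jacket", "p2")  # shoppers keep choosing the rain jacket
print(search("jacket"))       # the clicked jacket now ranks first
```

Production systems add NLP on top of this skeleton, handling synonyms, misspellings and intent, so "raincoat" can still find the rain jacket.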
Drawbacks and limitations of natural language processing
Applications that learn and adapt from interactions with human beings face a tricky challenge: human beings. We can be fickle and random, and some of us are outright saboteurs. The most infamous example of the latter is certainly the public reception of Microsoft’s AI chatbot Tay. Unveiled on Twitter and a few messaging apps in March of 2016, it took all of one day of human interaction for Tay to sink from sunny optimist (sample tweet: “humans are super cool”) to a spewer of hateful vitriol. Mama always said: Ugly in is ugly out.
There’s also some evidence that so-called "recommender systems," which are often assisted by NLP technology, may exacerbate the digital siloing effect.
“The decisions made by these systems can influence user beliefs and preferences, which in turn affect the feedback the learning system receives — thus creating a feedback loop,” researchers for DeepMind wrote in a recent study. And just like that, another echo chamber is born.
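That loop is easy to reproduce in miniature. In the toy simulation below (topics, click probability and random seed are all arbitrary assumptions), the recommender shows its current favorite, users tend to click what they're shown, and every click entrenches the favorite further:

```python
import random

# Toy feedback loop: the recommender shows its current top topic, users
# mostly click what they're shown, and each click reinforces the favorite.
random.seed(0)
scores = {"politics": 1.0, "sports": 1.0, "science": 1.0}

for _ in range(50):
    shown = max(scores, key=scores.get)  # recommend the current top topic
    if random.random() < 0.7:            # users tend to click what is shown
        scores[shown] += 1.0             # ...which entrenches it further

print(scores)  # one topic has run away with nearly all the weight
```

The other two topics never get shown again after the first few rounds, so they never get the chance to earn clicks: the echo chamber in three lines of logic.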
One of NLP's most obvious limitations is also frequent among humans: missing the point. Gone are the days of interfaces that literalize colloquial phrases like “he’s on fire” or (eww) “pick your brain,” but getting machines and humans on the same idiosyncratic wavelength isn't easy. Take Shakespeare, for example. Microsoft recently ran nearly 20 of the Bard’s plays through its Text Analytics API. The application charted emotional extremities in lines of dialogue throughout the tragedy and comedy datasets. Unfortunately, the machine reader sometimes had trouble telling comic from tragic — and not in a “don’t know whether to laugh or cry” way.
“The algorithm couldn’t work out whether Hamlet’s mad ravings were real or imagined, whether characters were being deceptive or telling the truth,” a Microsoft reporter wrote. “That meant that the AI labelled events as positive when they were negative, and vice-versa. The AI believed The Comedy of Errors was a tragedy because of the physical, slapstick moments in the play.”
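A bare-bones lexicon scorer shows why this happens. With purely word-level sentiment (the word lists here are illustrative, not Microsoft's), praise like "he's on fire" reads as disaster, and slapstick beatings drag a farce's score into tragic territory:

```python
# A bare-bones lexicon sentiment scorer; word lists are illustrative.
POSITIVE = {"love", "joy", "laugh", "merry"}
NEGATIVE = {"fire", "beaten", "chased", "dies"}

def sentiment(line):
    words = line.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("he is on fire tonight"))            # -1: praise misread as disaster
print(sentiment("the twin gets beaten and chased"))  # -2: farce misread as tragedy
```

Context-aware models narrow this gap, but deception, irony and madness, Hamlet's stock-in-trade, remain genuinely hard.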
The use of NLP, particularly on a large scale, also has attendant privacy issues. For instance, researchers in the above-mentioned Stanford study looked at only public posts with no personal identifiers, according to Sarin, but other parties might not be so ethical. And though increased sharing and AI analysis of medical data could have major public health benefits, patients have little ability to share their medical information in a broader repository.
From the perspective of job security, of course, some NLP shortcomings can seem like saving graces — reminders that even the most advanced chatbot can't render a medical diagnosis, that the most accurate language translation requires human eyes and that Shakespeare scholars won’t soon be automated out of academia.
Which isn't to negate the impact of natural language processing. More than a mere tool of convenience, it's driving serious technological breakthroughs.
As Sarin said of her NLP-powered research, “It’s really just the tip of the iceberg.”