AI Speech & Language Processing Update

Al Gharakhanian
4 min read · Dec 21, 2019

Clearly, last week’s most notable events in the world of AI were the NeurIPS conference held in Vancouver (see the segment below) and Intel’s $2B acquisition of Habana Labs, an Israeli deep learning hardware company.

As for Natural Language Processing (NLP), more and more industries are finding great use cases that benefit from NLP. I have included blurbs on a few startups that are leveraging NLP to do great things for the healthcare industry.

The domination of transformer-based language models keeps expanding, and we are seeing smaller and more efficient implementations of BERT, the Bidirectional Encoder Representations from Transformers language model (Devlin et al., 2018). Additionally, two remarkable reports covering the field of AI were published last week. The first, titled “2020 State of Enterprise Machine Learning”, was published by Algorithmia. The second, “Artificial Intelligence Index Report 2019”, was published by the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

A Lite BERT for Reducing Inference Time

Despite their impressive performance, transformer-based language models such as BERT (Devlin et al., 2018) remain a challenge to work with. The computational resources and memory footprint required for both training and inference can be overwhelming. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations (Lan et al., 2019) is a new implementation with merely 4.7% to 18% of the traditional BERT parameters, and it trains about 1.7x faster than the original. A remarkable accomplishment (see the sketch after the link below).

A Lite BERT for Reducing Inference Time — Towards AI — Medium

MEDIUM.COM

BERT (Devlin et al., 2018) achieved lots of state-of-the-art results in 2018. However, it is not easy to use BERT in production, even for small-footprint experiments. The base…
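To make the size gap concrete, here is a minimal sketch (my own illustration, not from the paper) that loads both checkpoints through the Hugging Face transformers library and compares raw parameter counts. ALBERT achieves the reduction mainly through factorized embedding parameterization and cross-layer parameter sharing, which is why the count drops so sharply while the network depth stays the same.

```python
# Minimal sketch: comparing BERT and ALBERT parameter counts.
# Assumes the Hugging Face transformers library (pip install transformers torch).
from transformers import AlbertModel, BertModel

def count_parameters(model):
    """Total number of trainable parameters in a PyTorch model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

bert = BertModel.from_pretrained("bert-base-uncased")   # roughly 110M parameters
albert = AlbertModel.from_pretrained("albert-base-v2")  # roughly 12M parameters

print(f"BERT base:   {count_parameters(bert):,}")
print(f"ALBERT base: {count_parameters(albert):,}")
print(f"ALBERT is {count_parameters(albert) / count_parameters(bert):.1%} the size of BERT base")
```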

Google T5 Explores the Limits of Transfer Learning — SyncedReview — Medium

MEDIUM.COM

Synced invited Samuel R. Bowman, an Assistant Professor at New York University who works on artificial neural network models for natural language understanding, to share his thoughts on T5.
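T5’s central idea is to cast every NLP task as text-to-text: the task specification goes into the input string, and the answer comes back as a string. Here is a minimal sketch of that framing (my own illustration, not from the Synced piece) using the Hugging Face transformers library, with t5-small as the smallest public checkpoint:

```python
# Minimal text-to-text sketch with T5 via Hugging Face transformers.
# The task ("translate English to German:") is part of the input text itself.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Expected output along the lines of: "Das Haus ist wunderbar."
```

Summarization, classification, and question answering work the same way; only the task prefix in the input string changes.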

Develop Smaller Speech Recognition Models with NVIDIA’s NeMo Framework

DEVBLOGS.NVIDIA.COM

Clearly, AI-based Automatic Speech Recognition (ASR) systems are here to stay, and they will find homes in a plethora of applications in the year to come. Most of these applications will run at the edge of the network, where computational resources and memory are scarce commodities. NVIDIA just released a new ASR model that has produced state-of-the-art results, but more importantly has done so with far fewer parameters (19M vs. 23M for a competing model called Multi-Stream Self-Attention)…
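For readers who want to try a small pretrained checkpoint, here is a rough sketch of loading NVIDIA’s QuartzNet model through NeMo and transcribing an audio file. The model name, the transcribe call, and the file path are assumptions on my part based on the NeMo 1.x interface; the API has shifted between releases, so verify against the linked post and NVIDIA’s docs.

```python
# Rough sketch: transcribing audio with a pretrained NeMo ASR model.
# Assumes nemo_toolkit is installed and follows the NeMo 1.x API
# (the interface has changed across releases; verify against NVIDIA's docs).
import nemo.collections.asr as nemo_asr

# QuartzNet15x5Base-En is a small pretrained English CTC model (~19M parameters).
model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")

# "sample.wav" is a placeholder path to a 16 kHz mono WAV file.
transcripts = model.transcribe(paths2audio_files=["sample.wav"])
print(transcripts[0])

# NeMo models are PyTorch modules, so the parameter count is easy to check:
print(sum(p.numel() for p in model.parameters()))
```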

Utilization of Language Processing in Medicine

AI-based Natural Language Processing (NLP) has impacted many industries, including voice assistants, customer service chatbots, finance, and document processing, among many others. The field of medicine has also been a big beneficiary of the technology. The following are just a few of the companies I came across that have put NLP to good use and are making a real impact on patient care:

1. Roam Analytics

The company uses NLP to manage patients’ electronic health records. Its platform enables providers to streamline data analysis and storage.

2. Appto

Appto’s platform automates manual tasks such as examining X-rays, tests, and CT scans, as well as data entry, which helps eliminate human errors. Additionally, the company offers an interactive app intended to respond to patient inquiries.

3. Sense.ly

Sense.ly offers an AI-based virtual patient assistant. Its NLP algorithms are used to efficiently monitor patients, understand their queries, and gather feedback. The platform can also provide guidance to practitioners, aiding them in making better treatment suggestions.

NeurIPS 2019

Last week Vancouver hosted NeurIPS 2019 (Conference on Neural Information Processing Systems), the largest and most prestigious AI conference. A ton of papers were presented, and the following NLP-centric work caught my eye:

1. Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics

2. Text-Based Interactive Recommendation via Constraint-Augmented Reinforcement Learning

3. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks

4. A Tensorized Transformer for Language Modeling

5. Drill-down: Interactive Retrieval of Complex Scenes using Natural Language Queries

6. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

7. Training Language GANs from Scratch

8. Ouroboros: On Accelerating Training of Transformer-Based Language Models

9. XLNet: Generalized Autoregressive Pretraining for Language Understanding

10. Cross-lingual Language Model Pretraining

11. Unified Language Model Pre-training for Natural Language Understanding and Generation

12. Inducing brain-relevant bias in natural language processing models

13. Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)

14. Can Unconditional Language Models Recover Arbitrary Sentences?

=======================================

Al Gharakhanian

info@cogneefy.com | www | Linkedin | blog | Twitter
