Natural Language Processing (NLP) Heroes

Natural language processing was one of the most sought-after fields in 2020, with major progress including the release of GPT-3. Text analytics has become far easier to approach than in previous years, and changes and improvements are happening all across the field, so keeping track of the commendable work being done around text analytics is a task in itself.

Many people are remembered for their commendable work in the field of natural language processing, and a few of them are described below. Follow our current heroes of NLP to stay in touch with the new advancements in this arena.

Christopher D. Manning

Director of Stanford Artificial Intelligence Laboratory (SAIL)

Christopher Manning is the first NLP hero we are going to talk about. He is also the Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence, the founder of the Stanford NLP Group, and a professor of Computer Science and Linguistics at Stanford. His goal is to make computers understand human language with ease. He has researched topics such as tree-recursive neural networks, the GloVe model of word vectors, sentiment analysis, neural network dependency parsing, neural machine translation, question answering, and deep language understanding. He is a leader in applying deep learning to natural language processing and is the course instructor for the famous CS224N: Natural Language Processing with Deep Learning. He has also co-authored several books, including Introduction to Information Retrieval and Foundations of Statistical Natural Language Processing.

You can follow him through the links below –

Kathleen McKeown

Professor of Computer Science and Director, Data Science Institute, Columbia University

Kathleen R. McKeown is the Henry and Gertrude Rothschild Professor of Computer Science at Columbia University and the Founding Director of the Data Science Institute at Columbia. She works in natural language processing, covering text summarization, natural language generation, and analysis of social media. She designed Newsblaster, a multi-lingual, multi-document summarization system that derives summary news stories from the contents of several news sites. She recently published a paper on tracking electricity usage, applying reinforcement learning to log messages to study usage behavior. Within text summarization, she has also worked on generating live updates during disaster situations.

You can read more about her on the following links –

Quoc Le

Research Scientist at Google Brain

Quoc Le is a research scientist at Google Brain. He has a natural flair for AI and loves automating things. In his early years, he worked on deep learning systems for image classification, and subsequently on the foundations of neural machine translation. From 2014 onwards he has worked towards automating machine learning (AutoML), and Google Cloud AutoML builds on his research. He worked on the Neural Architecture Search method to make AutoML more robust, and he has also worked on improving core machine learning algorithms such as gradient descent, along with several other mechanisms to improve the AutoML process. He has many publications to his name, most of them revolving around AutoML, sequence learning, and machine learning. His major research areas are algorithms and theory, machine perception, machine intelligence, machine translation, NLP, and speech processing.

You can get to know more about him here –

Oren Etzioni

CEO of the Allen Institute for Artificial Intelligence (AI2)

Oren is the CEO of AI2. He has been a Professor Emeritus at the University of Washington since October 2020 and a Venture Partner at the Madrona Venture Group since 2000. His awards include Seattle’s Geek of the Year (2013), and he has founded or co-founded several companies, including Farecast (acquired by Microsoft). He has written over 100 technical papers, as well as commentary on AI for The New York Times, Wired, and Nature. He helped pioneer meta-search, online comparison shopping, machine reading, and Open Information Extraction, having entered the field of information extraction very early and tackled niche problems. He introduced Semantic Scholar, a tool that can summarize large textual PDF files, which was used to organize the recently accumulated COVID-19 literature.

You can find more about him on the following links –

Dan Jurafsky

Professor of Computer Science at Stanford University

Dan Jurafsky is a Professor of Computer Science and Linguistics at Stanford University and a recipient of the 2002 MacArthur Fellowship. He has written the book “Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition”. He is known for developing, with Daniel Gildea, the first automated system for semantic role labeling.

With Chris Manning, he co-created one of the first massive open online courses, Stanford’s course on Natural Language Processing. His research ranges widely across computational linguistics. His special interests include natural language understanding, human-human conversation, the relationship between human and machine processing, and the application of natural language processing to the social and behavioral sciences. His recent work focuses on using transfer learning to interpret MIDI music files, with LSTMs as the neural network models.

You can find him on the following links:

Thomas Wolf

Co-founder and Chief Science Officer at Hugging Face 🤗

Hugging Face is very popular in the natural language processing space, so it is only fitting to include someone from Hugging Face here. Thomas Wolf is the co-founder and Chief Science Officer at Hugging Face, a company that works on natural language processing and natural language generation. He is interested in natural language processing, deep learning, and computational linguistics. The Hugging Face Transformers package is a very useful Python library that provides pre-trained models for several natural language processing tasks.
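
As a quick illustration (the article itself includes no code), here is a minimal sketch of how the Transformers library is typically used for such a task. It assumes the transformers package and a backend such as PyTorch are installed, and the default sentiment-analysis model it downloads may change between library versions.

```python
# Minimal sketch: running a pre-trained model via the Hugging Face Transformers pipeline API.
# Assumes `pip install transformers` plus a backend such as PyTorch is available.
from transformers import pipeline

# Load a pre-trained sentiment-analysis model (weights are downloaded on first run).
classifier = pipeline("sentiment-analysis")

# Run inference on a couple of example sentences.
results = classifier([
    "Natural language processing has never been this accessible.",
    "Keeping up with every new NLP paper is exhausting.",
])

# Each result is a dict with a predicted label and a confidence score.
for result in results:
    print(result["label"], round(result["score"], 3))
```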

He has a vast number of publications related to the various pre-trained models included in the Hugging Face library.

You can stay updated with his activities on the following links:

Hae Chang Rim

Professor at Korea University

Hae-Chang Rim is a professor in the Department of Computer Science at Korea University. He received his Ph.D. in computer science from the University of Texas at Austin and is a member of the Association for Computational Linguistics. His research interests are natural language processing, Korean language engineering, and information retrieval. He has many publications on natural language processing applications for mobile devices, information retrieval, the quality of question-answering systems, and more.

You can have a look at his profile and publications here –

Sebastian Ruder

Research Scientist at DeepMind

Sebastian Ruder is a Research Scientist on the language team at DeepMind. He blogs about machine learning, deep learning, and natural language processing, and has worked extensively on transfer learning in NLP. He wants to make NLP accessible to people all over the world by accommodating many languages, not just English. His recent publications include work on the Long Range Arena benchmark for efficient Transformers and on cross-lingual transfer.

He also releases a newsletter covering recent advances in machine learning, deep learning, and natural language processing.

You can follow him here:

Ilya Sutskever

Co-founder and Chief Scientist at OpenAI

Ilya Sutskever is the Co-Founder and Chief Scientist at OpenAI. Earlier, he was a Research Scientist at Google Brain. Together with Oriol Vinyals and Quoc Le, he invented sequence-to-sequence learning. He did his postdoc in Andrew Ng’s group, and he has a large number of research publications in sequence-to-sequence learning, generative modeling, deep convolutional neural networks, and more.

He is also a co-inventor of AlexNet and a key contributor to AlphaGo and TensorFlow. He is someone worth following for anyone interested in the field of data science, not just NLP.

You can follow him here:

In this article, we looked at some of the heroes of natural language processing who are engaged in marvelous work. You can go through the links to follow them regularly and learn about their publications and research.

Also, if you think we missed anyone, feel free to comment so that we can add them to the list.

If you like what we do and want to know more about our community 👥, please consider sharing, following, and joining it. It is completely FREE.

Also, don’t forget to show your love ❤️ by clapping 👏 for this article, and let us know your views 💬 in the comments.

Join here: https://blogs.colearninglounge.com/join-us