Volume 20, No 11 (2022)

DOI: 10.14704/NQ.2022.20.11.NQ66027


Himani Dighorikar, Shridhar Ashtikar, Ishika Bajaj, Shivam Gupta, Dilipkumar A. Borikar


Predicting the next word has long been an important topic in Natural Language Processing. Nowadays, instead of spending time writing everything out and then proofreading it, we simply use auto-complete. We use it every day but rarely pause to understand the processing behind it. Natural language generation (NLG) focuses on producing natural, human-interpretable language; applied to this task, it allows us to predict the word most likely to follow in a sentence. Deep learning models such as Long Short-Term Memory (LSTM) networks and Recurrent Neural Networks (RNNs) are used to simplify the process of writing and suggesting. This paper provides an analytical study of text prediction and a formal representation of how text can be correlated to produce a set of candidate words derived from the data.
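The next-word prediction task described above can be illustrated, independently of the LSTM/RNN architecture the paper studies, with a minimal frequency-based sketch (the function names and the toy corpus below are hypothetical, not taken from the paper):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word in a toy corpus, which words follow it and how often."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word, k=3):
    """Return the k most frequent successors of `word` as next-word suggestions."""
    return [w for w, _ in model[word.lower()].most_common(k)]

corpus = [
    "deep learning models predict the next word",
    "language models predict the next token",
    "the next word depends on context",
]
model = train_bigrams(corpus)
print(predict_next(model, "next"))
```

An LSTM replaces these raw bigram counts with a learned probability distribution over the vocabulary conditioned on the whole preceding context, which is what lets it rank plausible continuations beyond the immediately preceding word.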


Deep Learning, Natural Language Processing, Recurrent Neural Network, Long Short-Term Memory, Natural Language Generation, Bi-LSTM
