I got a PowerIterationFailedConvergence: (PowerIterationFailedConvergence(...), 'power iteration failed to converge within 100 iterations') when I tried to summarize
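If the error comes from networkx's pagerank (used under the hood by many TextRank-style summarizers), a minimal sketch of the usual workaround is to raise max_iter and/or loosen tol. The graph below is invented purely for illustration:

```python
import networkx as nx

# Toy similarity graph standing in for the sentence-similarity graph a
# TextRank-style summarizer builds; the edges here are made up.
sim_graph = nx.Graph()
sim_graph.add_weighted_edges_from([(0, 1, 0.9), (1, 2, 0.2), (0, 2, 0.4)])

try:
    # max_iter defaults to 100; raising it and/or loosening tol usually
    # gets past PowerIterationFailedConvergence.
    scores = nx.pagerank(sim_graph, max_iter=1000, tol=1e-4)
except nx.PowerIterationFailedConvergence:
    # If it still fails, the graph itself is suspect (e.g. isolated nodes
    # or all-zero edge weights) and is worth inspecting before retrying.
    raise
print(scores)
```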
I have some custom data I want to use to further pre-train the BERT model. I’ve tried the following two approaches so far: Starting with a pre-trained BERT
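For reference, a minimal sketch of further pre-training BERT with the masked-language-modelling objective via Hugging Face Transformers; the file name, hyperparameters, and dataset layout are assumptions, not the asker's setup:

```python
from transformers import (BertTokenizerFast, BertForMaskedLM,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical one-sentence-per-line text file holding the custom domain data.
dataset = load_dataset("text", data_files={"train": "custom_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-further-pretrained",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```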
I had success with English: python -m spacy download en_core_web_lg; python -m spacy download en_core_web_sm; python -m spacy download en. I read https://spacy.io/mod
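A quick sketch of doing the same download/load from Python instead of the shell; the model name is taken from the commands above, so swap in whichever language's package is needed:

```python
import spacy

# Programmatic equivalent of `python -m spacy download en_core_web_sm`.
spacy.cli.download("en_core_web_sm")
nlp = spacy.load("en_core_web_sm")
print(nlp("This is a test.")[0].pos_)
```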
What does the term "downstream tasks" mean in NLP? I have seen this terminology used in several articles, but I can't understand the idea behind it.
text='Alice is a student.She likes studying.Teachers are giving a lot of homewok.' I am trying to get topics from a simple text (like the one above) with a coherence score
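A minimal sketch of extracting topics plus a coherence score with gensim; the naive sentence splitting and all hyperparameters below are assumptions for illustration:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

text = 'Alice is a student.She likes studying.Teachers are giving a lot of homewok.'
# Treat each sentence as a pseudo-document and tokenize naively.
docs = [s.lower().split() for s in text.replace('.', '. ').split('. ') if s]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, passes=10, random_state=0)
coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary, coherence='c_v')
print(lda.print_topics())
print('coherence:', coherence.get_coherence())
```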
So about a week ago I posted this question: Issues running a Keras model with custom layers. The suggestion there was to try to make this question smaller and t
I have a text file which contains lines as shown below: Electronically signed : Wes Scott, M.D.; Jun 26 2010 11:10AM CST The patient was referred by Dr. J
I am using some text for NLP analyses. I have cleaned the text, taking steps to remove non-alphanumeric characters, blanks, duplicate words, and stopwords, a
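A rough sketch of the kind of cleaning pipeline described above; the exact steps, their order, and the stopword list are assumptions, not the asker's code:

```python
import re
from nltk.corpus import stopwords  # requires nltk.download('stopwords') once

stop_words = set(stopwords.words('english'))

def clean(text):
    text = re.sub(r'[^A-Za-z0-9\s]', ' ', text)       # drop non-alphanumerics
    tokens = text.lower().split()                      # also collapses extra blanks
    tokens = [t for t in tokens if t not in stop_words]
    seen, deduped = set(), []
    for t in tokens:                                   # drop duplicate words
        if t not in seen:
            seen.add(t)
            deduped.append(t)
    return ' '.join(deduped)

print(clean('The patient, the PATIENT, was seen on 26 Jun 2010!'))
```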
I am trying to train a custom NER model using spaCy. Currently, I have more than 2k records for training, and each text consists of more than 100 words, at least
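A minimal spaCy v3 training-loop sketch for a custom NER model; the label, the single example record, and the hyperparameters are placeholders, not the asker's data:

```python
import random
import spacy
from spacy.training import Example

TRAIN_DATA = [
    ("Apple released a new phone in September.",
     {"entities": [(0, 5, "ORG")]}),
]

nlp = spacy.blank("en")
ner = nlp.add_pipe("ner")
for _, ann in TRAIN_DATA:
    for start, end, label in ann["entities"]:
        ner.add_label(label)

optimizer = nlp.initialize()
for epoch in range(10):
    random.shuffle(TRAIN_DATA)
    losses = {}
    for text, ann in TRAIN_DATA:
        example = Example.from_dict(nlp.make_doc(text), ann)
        nlp.update([example], sgd=optimizer, drop=0.3, losses=losses)
    print(epoch, losses)

nlp.to_disk("custom_ner_model")
```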
I have an HTML document and I'd like to tokenize it using spaCy while keeping each HTML tag as a single token. Here's my code: import spacy from spacy.symbols impo
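One hedged approach is to register each HTML tag as a tokenizer special case so spaCy emits it as a single token; this only helps for tags that appear as whitespace-separated substrings, and the tag list below is illustrative:

```python
import spacy
from spacy.symbols import ORTH

nlp = spacy.load("en_core_web_sm")
for tag in ("<html>", "</html>", "<body>", "</body>", "<p>", "</p>", "<br>"):
    # Tell the tokenizer never to split this exact string.
    nlp.tokenizer.add_special_case(tag, [{ORTH: tag}])

doc = nlp("<p> Hello world </p>")
print([t.text for t in doc])  # expected: ['<p>', 'Hello', 'world', '</p>']
```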
I want to implement character-level embedding. This is the usual word embedding. Word Embedding Input: [ [‘who’, ‘is’, ‘this’
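A sketch of character-level embeddings in Keras: each word becomes a sequence of character ids, which is embedded and encoded into one vector per word. The vocabulary, padding lengths, and layer sizes are all illustrative assumptions:

```python
import numpy as np
import tensorflow as tf

chars = "abcdefghijklmnopqrstuvwxyz"
char2id = {c: i + 1 for i, c in enumerate(chars)}   # 0 is reserved for padding
max_word_len, max_sent_len = 10, 6

def encode(sentence):
    out = np.zeros((max_sent_len, max_word_len), dtype="int32")
    for wi, word in enumerate(sentence[:max_sent_len]):
        for ci, ch in enumerate(word[:max_word_len]):
            out[wi, ci] = char2id.get(ch, 0)
    return out

x = np.stack([encode(["who", "is", "this", "guy"])])   # shape (1, 6, 10)

# Per-word character encoder: embed each character, then run a BiLSTM over it.
char_encoder = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=len(chars) + 1, output_dim=16),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
])
# TimeDistributed applies the character encoder to every word position.
word_vectors = tf.keras.layers.TimeDistributed(char_encoder)(x)
print(word_vectors.shape)   # (1, 6, 64): one 64-d vector per word
```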
I am writing code inspired by https://www.tensorflow.org/addons/api_docs/python/tfa/seq2seq/BasicDecoder. In the translation/generation step we instantiate a BasicDecoder
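A minimal training-time sketch of instantiating and calling tfa.seq2seq.BasicDecoder, loosely following the linked docs; all sizes and the decoder initial state below are placeholders:

```python
import tensorflow as tf
import tensorflow_addons as tfa

batch, max_time, units, vocab_size, emb_dim = 4, 7, 32, 100, 16

decoder_cell = tf.keras.layers.LSTMCell(units)
sampler = tfa.seq2seq.TrainingSampler()                 # feeds ground-truth tokens
output_layer = tf.keras.layers.Dense(vocab_size)
decoder = tfa.seq2seq.BasicDecoder(decoder_cell, sampler, output_layer=output_layer)

embedding = tf.keras.layers.Embedding(vocab_size, emb_dim)
decoder_inputs = tf.random.uniform((batch, max_time), maxval=vocab_size, dtype=tf.int32)
decoder_emb = embedding(decoder_inputs)

# Stand-in for the encoder's final state (normally comes from the encoder RNN).
initial_state = decoder_cell.get_initial_state(batch_size=batch, dtype=tf.float32)

outputs, final_state, final_lengths = decoder(
    decoder_emb,
    initial_state=initial_state,
    sequence_length=tf.fill([batch], max_time),
)
print(outputs.rnn_output.shape)   # (batch, max_time, vocab_size)
```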
I have built a BiLSTM model with an attention layer for a sentence classification task, but I am getting an error that my assertion has failed due to a mismatch in n
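For comparison, a hedged sketch of a BiLSTM + attention classifier in Keras, using the built-in Attention layer as self-attention over the LSTM states; the sizes are placeholders and this is not the asker's architecture:

```python
import tensorflow as tf

vocab_size, max_len, emb_dim = 10000, 50, 100

inputs = tf.keras.Input(shape=(max_len,))
x = tf.keras.layers.Embedding(vocab_size, emb_dim)(inputs)
h = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(x)

# Self-attention: query and value are both the BiLSTM state sequence.
context = tf.keras.layers.Attention()([h, h])
pooled = tf.keras.layers.GlobalAveragePooling1D()(context)

outputs = tf.keras.layers.Dense(1, activation="sigmoid")(pooled)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```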
If the question seems too dumb, it is because I am new to TensorFlow. I was implementing a toy encoder-decoder problem using TensorFlow 2’s TFA seq2seq implementation
Working in R. I know the pre-trained GloVe embeddings (e.g., "glove.6B.50d.txt") can be found here: https://nlp.stanford.edu/projects/glove/. However, I've had
My dataset is only 10 thousand sentences. I run it in batches of 100, and clear the memory on each run. I manually slice the sentences to only 50 characters. Af
I was trying to build a model with the Sequential API (it has already worked for me with the Functional API). Here is the model that I'm trying to build in Sequ
I had this error while using AraBERT: from arabert.preprocess import ArabertPreprocessor model_name = "bert-base-arabertv2" arabert_prep = ArabertPreprocessor(
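In recent arabert releases the preprocessor is constructed with a model_name keyword; a minimal usage sketch (the sample sentence is illustrative, and an older installed arabert version may expect different arguments):

```python
from arabert.preprocess import ArabertPreprocessor

model_name = "bert-base-arabertv2"
arabert_prep = ArabertPreprocessor(model_name=model_name)

text = "ولن نبالغ إذا قلنا إن هاتف أو كمبيوتر المكتب في زمننا هذا ضروري"
print(arabert_prep.preprocess(text))
```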
Let's suppose we have labeled data for text classification in a nice CSV file. We have two columns, "text" and "label". I am kind of struggling to understand spaCy
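One hedged way to turn such a text/label CSV into spaCy v3 training examples for the textcat component; the file name and label values are placeholders:

```python
import pandas as pd
import spacy
from spacy.training import Example

df = pd.read_csv("data.csv")          # columns: "text", "label"
labels = sorted(df["label"].unique())

nlp = spacy.blank("en")
textcat = nlp.add_pipe("textcat")
for label in labels:
    textcat.add_label(label)

# One Example per row, with a one-hot "cats" dict over all labels.
examples = []
for text, label in zip(df["text"], df["label"]):
    cats = {l: float(l == label) for l in labels}
    examples.append(Example.from_dict(nlp.make_doc(text), {"cats": cats}))

optimizer = nlp.initialize(lambda: examples)
losses = {}
nlp.update(examples[:8], sgd=optimizer, losses=losses)
print(losses)
```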
I am trying to extract relations between two entities (entity1 - relation - entity2) from news articles for stock prediction. I have used NER for entity extraction