Testing the NLTK classifier on specific file

The following code runs a Naive Bayes movie review classifier and generates a list of the most informative features.

Note: the **movie_reviews** corpus ships with NLTK.
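If the corpus data has not been downloaded yet, it can be fetched once with NLTK's downloader (the stopwords and punkt packages are also needed by the code below):

    import nltk

    nltk.download('movie_reviews')  # the review corpus used below
    nltk.download('stopwords')      # used by stopwords.words('english')
    nltk.download('punkt')          # used by word_tokenize in the last example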

    import string                   # needed for string.punctuation below
    from itertools import chain

    import nltk                     # needed for nltk.classify.accuracy below
    from nltk.corpus import stopwords
    from nltk.probability import FreqDist
    from nltk.classify import NaiveBayesClassifier
    from nltk.corpus import movie_reviews

    stop = stopwords.words('english')

    documents = [([w for w in movie_reviews.words(i)
                   if w.lower() not in stop and w.lower() not in string.punctuation],
                  i.split('/')[0])
                 for i in movie_reviews.fileids()]


    word_features = FreqDist(chain(*[i for i, j in documents]))
    word_features = word_features.keys()[:100]  # NLTK 2 / Python 2: keys() is a frequency-sorted list

    numtrain = int(len(documents) * 90 / 100)
    train_set = [({i: (i in tokens) for i in word_features}, tag)
                 for tokens, tag in documents[:numtrain]]
    test_set = [({i: (i in tokens) for i in word_features}, tag)
                for tokens, tag in documents[numtrain:]]

    classifier = NaiveBayesClassifier.train(train_set)
    print nltk.classify.accuracy(classifier, test_set)
    classifier.show_most_informative_features(5)

The code is from alvas.

How can I test the classifier on a specific file?

Please let me know if my question is ambiguous or wrong.

Tags: python-2.7, nlp, classification, nltk, text-classification

asked Mar 27 '15 at 13:34 by ZaM; edited Mar 29 '15 at 12:03 by alvas

Answer 1 (accepted, by alvas)


First, read these answers carefully; they contain parts of the answer you need and also briefly explain what the classifier does and how it works in NLTK:

   nltk NaiveBayesClassifier training for sentiment analysis
   Using my own corpus instead of movie_reviews corpus for Classification in NLTK
   http://www.nltk.org/book/ch06.html

Testing the classifier on annotated data

Now to answer your question. We assume that your question is a follow-up to this question: Using my own corpus instead of movie_reviews corpus for Classification in NLTK

If your test text is structured the same way as the movie_review corpus, then you can simply read the test data as you would for the training data:

Just in case the explanation of the code is unclear, here's a walkthrough:

    from nltk.corpus.reader import CategorizedPlaintextCorpusReader

    traindir = '/home/alvas/my_movie_reviews'
    mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt',
                                          cat_pattern=r'(neg|pos)/.*', encoding='ascii')

The lines above read a directory my_movie_reviews with the following structure:

    \my_movie_reviews
        \pos
            123.txt
            234.txt
        \neg
            456.txt
            789.txt
        README

The next line then extracts each document together with the pos/neg tag that is encoded in the directory structure.

    documents = [([w for w in mr.words(i)
                   if w.lower() not in stop and w not in string.punctuation],
                  i.split('/')[0])
                 for i in mr.fileids()]

Here's the explanation for the above line:

This extracts the pos/neg tag:

    labels = [i.split('/')[0] for i in mr.fileids()]

This reads the words of each file through the CategorizedPlaintextCorpusReader object:

    words = [w for w in mr.words(i)]

This removes the stopwords:

    words = [w for w in mr.words(i) if w.lower() not in stop]

This removes the punctuation:

    words = [w for w in mr.words(i) if w not in string.punctuation]

This removes both the stopwords and the punctuation:

    words = [w for w in mr.words(i) if w.lower() not in stop and w not in string.punctuation]

And this puts the cleaned words in a tuple with the pos/neg label:

    documents = [([w for w in mr.words(i)
                   if w.lower() not in stop and w not in string.punctuation],
                  i.split('/')[0])
                 for i in mr.fileids()]

The SAME process should be applied when you read the test data!!!
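One way to guarantee that is to factor the cleaning into a small helper and call it on both corpus readers (a sketch; the read_documents name is mine, not from the answer):

    def read_documents(reader):
        # Same cleaning as above: drop stopwords and punctuation,
        # keep the pos/neg label encoded in the file path.
        return [([w for w in reader.words(i)
                  if w.lower() not in stop and w not in string.punctuation],
                 i.split('/')[0])
                for i in reader.fileids()]

    documents = read_documents(mr)              # training data
    # later: test_documents = read_documents(mr_test)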

Now to the feature processing:

The following lines extract the top 100 features for the classifier.

This puts the word features into a FreqDist object, which records the number of times each unique word occurs:

    word_features = FreqDist(chain(*[i for i, j in documents]))

This cuts the FreqDist down to the top 100 words by count:

    word_features = word_features.keys()[:100]
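Note that word_features.keys()[:100] relies on the old NLTK 2 / Python 2 behavior, where FreqDist.keys() returned a list sorted by decreasing frequency. In NLTK 3, FreqDist is a collections.Counter subclass, so a version of the same step that works on both Python 2 and 3 could use most_common() (a sketch under that assumption):

    from itertools import chain
    from nltk.probability import FreqDist

    all_words = FreqDist(chain(*[words for words, label in documents]))
    # most_common() returns (word, count) pairs sorted by descending count.
    word_features = [word for word, count in all_words.most_common(100)]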

Next, the documents are processed into a classifiable format:

This splits the data into a training portion and a testing portion:

    numtrain = int(len(documents) * 90 / 100)

This builds the feature sets for the training documents:

    train_set = [({i: (i in tokens) for i in word_features}, tag)
                 for tokens, tag in documents[:numtrain]]

And this builds the feature sets for the testing documents:

    test_set = [({i: (i in tokens) for i in word_features}, tag)
                for tokens, tag in documents[numtrain:]]

Now to explain the long list comprehensions for train_set and test_set:

This takes the first numtrain documents as training documents:

    train_docs = documents[:numtrain]

This takes the rest of the documents as test documents:

    test_docs = documents[numtrain:]

And this extracts the feature sets for the classifier (see the full explanation at http://stackoverflow.com/questions/20827741/nltk-naivebayesclassifier-training-for-sentiment-analysis/):

    train_set = [({i: (i in tokens) for i in word_features}, tag)
                 for tokens, tag in train_docs]

You need to process the test documents in exactly the same way before extracting their features!
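The binary feature extraction can be factored out the same way, so the training and test documents are guaranteed to get identical features (again a sketch; featurize is my name, not the answer's):

    def featurize(tokens, word_features):
        # Binary bag-of-words: does each of the top words occur in the document?
        token_set = set(tokens)  # set membership is O(1) per lookup
        return {w: (w in token_set) for w in word_features}

    train_set = [(featurize(tokens, word_features), tag) for tokens, tag in train_docs]
    test_set = [(featurize(tokens, word_features), tag) for tokens, tag in test_docs]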

So here's how you can read the test data:

    stop = stopwords.words('english')

    # Reads the training data.
    traindir = '/home/alvas/my_movie_reviews'
    mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt',
                                          cat_pattern=r'(neg|pos)/.*', encoding='ascii')

    # Converts training data into tuples of [(words, label), ...]
    documents = [([w for w in mr.words(i)
                   if w.lower() not in stop and w not in string.punctuation],
                  i.split('/')[0])
                 for i in mr.fileids()]

    # Now do the same for the testing data.
    testdir = '/home/alvas/test_reviews'
    mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt',
                                               cat_pattern=r'(neg|pos)/.*', encoding='ascii')

    # Converts testing data into tuples of [(words, label), ...]
    test_documents = [([w for w in mr_test.words(i)
                        if w.lower() not in stop and w not in string.punctuation],
                       i.split('/')[0])
                      for i in mr_test.fileids()]

Then continue with the processing steps described above, and simply do this to get the label for the test document as @yvespeirsman answered:

    #### FOR TRAINING DATA ####
    stop = stopwords.words('english')

    # Reads the training data.
    traindir = '/home/alvas/my_movie_reviews'
    mr = CategorizedPlaintextCorpusReader(traindir, r'(?!\.).*\.txt',
                                          cat_pattern=r'(neg|pos)/.*', encoding='ascii')

    # Converts training data into tuples of [(words, label), ...]
    documents = [([w for w in mr.words(i)
                   if w.lower() not in stop and w not in string.punctuation],
                  i.split('/')[0])
                 for i in mr.fileids()]

    # Extract training features.
    word_features = FreqDist(chain(*[i for i, j in documents]))
    word_features = word_features.keys()[:100]

    # Assuming that you're using the full data set for training,
    # since your test set is different.
    train_set = [({i: (i in tokens) for i in word_features}, tag)
                 for tokens, tag in documents]

    #### TRAINS THE CLASSIFIER ####
    classifier = NaiveBayesClassifier.train(train_set)

    #### FOR TESTING DATA ####
    # Now do the same reading and processing for the testing data.
    testdir = '/home/alvas/test_reviews'
    mr_test = CategorizedPlaintextCorpusReader(testdir, r'(?!\.).*\.txt',
                                               cat_pattern=r'(neg|pos)/.*', encoding='ascii')

    # Converts testing data into tuples of [(words, label), ...]
    test_documents = [([w for w in mr_test.words(i)
                        if w.lower() not in stop and w not in string.punctuation],
                       i.split('/')[0])
                      for i in mr_test.fileids()]

    # Converts the test documents into feature sets.
    test_set = [({i: (i in tokens) for i in word_features}, tag)
                for tokens, tag in test_documents]

    #### EVALUATE THE CLASSIFIER ####
    for doc, gold_label in test_set:
        tagged_label = classifier.classify(doc)
        if tagged_label == gold_label:
            print("Woohoo, correct")
        else:
            print("Boohoo, wrong")

If the above code and explanation make no sense to you, then you MUST read this tutorial before proceeding: http://www.nltk.org/howto/classify.html

Now let's say you have no annotations in your test data, i.e. your test files are not laid out in a directory structure like movie_reviews but are just plain text files:

    \test_movie_reviews
        \1.txt
        \2.txt

Then there's no point in reading them into a categorized corpus; you can simply read and tag the documents, e.g.:

    import os

    for infile in os.listdir('test_movie_reviews'):
        for line in open(os.path.join('test_movie_reviews', infile), 'r'):
            doc = word_tokenize(line.lower())  # tokenize first, see below
            featurized_doc = {i: (i in doc) for i in word_features}
            tagged_label = classifier.classify(featurized_doc)
BUT you CANNOT evaluate the results without annotations, so there is no gold label to compare the predicted tag against in an if-else. Note also that you need to tokenize the text yourself if you're not using the CategorizedPlaintextCorpusReader, as done with word_tokenize above.

If you just want to tag a plaintext file test.txt:

    import string
    from itertools import chain

    from nltk.corpus import stopwords
    from nltk.probability import FreqDist
    from nltk.classify import NaiveBayesClassifier
    from nltk.corpus import movie_reviews
    from nltk import word_tokenize

    stop = stopwords.words('english')

    # Extracts the documents.
    documents = [([w for w in movie_reviews.words(i)
                   if w.lower() not in stop and w.lower() not in string.punctuation],
                  i.split('/')[0])
                 for i in movie_reviews.fileids()]

    # Extract the features.
    word_features = FreqDist(chain(*[i for i, j in documents]))
    word_features = word_features.keys()[:100]

    # Converts documents to features.
    train_set = [({i: (i in tokens) for i in word_features}, tag)
                 for tokens, tag in documents]

    # Train the classifier.
    classifier = NaiveBayesClassifier.train(train_set)

    # Tag the test file.
    with open('test.txt', 'r') as fin:
        for test_sentence in fin:
            # Tokenize the line.
            doc = word_tokenize(test_sentence.lower())
            featurized_doc = {i: (i in doc) for i in word_features}
            tagged_label = classifier.classify(featurized_doc)
            print(tagged_label)

Once again, please don't just copy and paste the solution; try to understand why and how it works.

answered Mar 29 '15 at 11:10 by alvas; edited Mar 31 '15 at 5:53

Comments:

There are many reasons why it's not perfect: maybe (i) the data is insufficient, (ii) the features are not good enough, (iii) the classifier choice, etc. Do take this course coursera.org/course/ml for more info. And if you can, I strongly encourage you to attend lxmls.it.pt/2015 – alvas Mar 30 '15 at 7:56

You evaluate the output by finding out how often it is correct. Classifiers learn which features to pay attention to, and how to combine them in making their decision. There's no logical rule; it's all statistics and weights. Your file cv081.txt comes out as pos with your feature set -- what else is there to understand? – alexis Mar 30 '15 at 21:42

Go through the machine learning course on the Coursera link and you will understand why and how the classifier works. I started out using them as black boxes; once you understand how they produce the annotations, it's easier to code and appreciate their elegance. – alvas Mar 31 '15 at 5:16

The first case is when you have annotated data to test on, the second is when you have none. If you need us to validate the code's output, can you post the full dataset up somewhere so that we can test (when we're free)? – alvas Mar 31 '15 at 5:20

Go through this nltk.org/book/ch06.html, no pain no gain ;) – alvas Mar 31 '15 at 5:51

Answer 2 (by yvespeirsman)


You can test on one file with classifier.classify(). This method takes as its input a dictionary with the features as its keys, and True or False as their values, depending on whether the feature occurs in the document or not. It outputs the most probable label for the file, according to the classifier. You can then compare this label with the correct label for the file to see if the classification is correct.

In your training and test sets, the feature dictionaries are always the first item in each tuple; the labels are the second item.

Thus, you can classify the first document in the test set like so:

    (my_document, my_label) = test_set[0]
    if classifier.classify(my_document) == my_label:
        print "correct!"
    else:
        print "incorrect!"


answered Mar 29 '15 at 3:19 by yvespeirsman; edited Mar 29 '15 at 3:31

Comments:


Could you please show me a complete example, if possible one that matches the example in my question? I'm very new to Python. Could you also tell me why you write 0 in test_set[0]? – ZaM Mar 29 '15 at 6:39

This is a complete example: if you paste the code immediately after your code in the question, it will work. The 0 simply takes the first document in your test set (the first item in a list has index 0). – yvespeirsman Mar 29 '15 at 7:18


Thank you so much. Is there a way to write the file name instead of 0 in test_set[0]? I don't know which file test_set points to, since we have the two folders pos|neg and every folder has its files. I ask because the most informative word was 'bad' (per the result in my question). The first file has more than a hundred occurrences of the word 'bad', but the program shows 'incorrect' in the output. Where is my mistake? – ZaM Mar 29 '15 at 8:37


First, test_set doesn't contain the filenames, so if you want to use that to identify a file, one way would be to read the file directly and pass it to the classifier as the feature dictionary I described above. Second, your current classifier uses binary features. It simply checks whether a word occurs in a document or not, but ignores the frequency with which the word occurs. That's probably why it misclassifies a file with many occurrences of bad. – yvespeirsman Mar 29 '15 at 9:02
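To illustrate that last point: NLTK's NaiveBayesClassifier treats feature values as nominal, so one simple frequency-aware variant is to bucket the word counts instead of passing True/False. This is my own sketch, not code from the thread:

    from nltk.probability import FreqDist

    def count_features(tokens, word_features):
        # Bucket per-word counts into nominal values ('absent'/'rare'/'frequent')
        # so the classifier can distinguish one 'bad' from a hundred.
        counts = FreqDist(tokens)
        features = {}
        for w in word_features:
            if counts[w] == 0:
                features[w] = 'absent'
            elif counts[w] <= 2:
                features[w] = 'rare'
            else:
                features[w] = 'frequent'
        return features

    train_set = [(count_features(tokens, word_features), tag)
                 for tokens, tag in documents]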



