NLTK: Sentiment Analysis with Python
Second Try: Sentiment Analysis in Python by andy
==Introduction==
After my first experiments with using R for sentiment analysis, I started talking with a friend here at school about my work. Jackson and I decided that we’d like to give it a better shot and really try to get some meaningful results. After a lot of research, we decided to shift languages to Python (even though we both know R). We made this shift because Python has a number of very useful libraries for text processing and sentiment analysis, plus it’s easy to code in. We launched right into tutorials and coding, and this post will be about that process and our results.
We also met with Christopher Potts, a professor of linguistics here at Stanford. Prior to meeting with him, we consulted his sentiment analysis guide extensively and found it incredibly useful. We had a fantastic chat with Professor Potts and he helped us grasp some of the concepts we were working on.
If you’d like to jump straight to seeing the full code, you can head over to the GitHub repository: https://raw.githubusercontent.com/abromberg/sentiment_analysis_python/master/sentiment_analysis.py
==The Setup==
One of the resources we got a lot of mileage out of was StreamHacker, especially the articles on basic techniques, precision and recall, and eliminating features. More to follow about each of those elements.
Another great discovery was the Natural Language Toolkit (NLTK). This is an incredible library for Python that can do a huge amount of text processing and analysis. This would end up forming the basis for our program.
During our first attempt, we basically just tried to convert my program in R into Python. We quickly realized that not only did Python have more efficient ways to do some of the steps, but it was also missing some functionality that I used in the R version. So instead, we started from StreamHacker’s code.
An important piece of sentiment analysis terminology: “features” are whatever you’re analyzing in an attempt to correlate to the labels. For example, in this code, the features will be the words in each review. Other algorithms could use different types of features — some algorithms use bigrams or trigrams (strings of two or three consecutive words, respectively) as the features.
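For instance, a bigram-based feature extractor (not something this post uses; just an illustrative sketch built on NLTK's nltk.bigrams helper) might look like this:

 #hypothetical example: use pairs of consecutive words as features instead of single words
 def make_bigram_dict(words):
     return dict([(bigram, True) for bigram in nltk.bigrams(words)])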
An idea from StreamHacker that we really liked was writing a function to evaluate different feature selection mechanisms. That means that we would be able to write different methods to select different subsets of the features (read: words) in the reviews and then evaluate those methods.
As an aside, here are the imports we used for the project, so I won’t have to reference them again:
 import re, math, collections, itertools
 import nltk, nltk.classify.util, nltk.metrics
 from nltk.classify import NaiveBayesClassifier
 from nltk.metrics import BigramAssocMeasures
 from nltk.probability import FreqDist, ConditionalFreqDist
We began to write our feature evaluator (again, many thanks to StreamHacker.):
 def evaluate_features(feature_select):
     #reading pre-labeled input and splitting into lines
     posSentences = open('polarityData\\rt-polaritydata\\rt-polarity-pos.txt', 'r')
     negSentences = open('polarityData\\rt-polaritydata\\rt-polarity-neg.txt', 'r')
     posSentences = re.split(r'\n', posSentences.read())
     negSentences = re.split(r'\n', negSentences.read())
 
     posFeatures = []
     negFeatures = []
     #http://stackoverflow.com/questions/367155/splitting-a-string-into-words-and-punctuation
     #breaks up the sentences into lists of individual words (as selected by the input mechanism) and appends 'pos' or 'neg' after each list
     for i in posSentences:
         posWords = re.findall(r"[\w']+|[.,!?;]", i)
         posWords = [feature_select(posWords), 'pos']
         posFeatures.append(posWords)
     for i in negSentences:
         negWords = re.findall(r"[\w']+|[.,!?;]", i)
         negWords = [feature_select(negWords), 'neg']
         negFeatures.append(negWords)
This is the same polarity data that was used in my previous post, so check that out if you’re curious about the data. Essentially, that block of code splits up the reviews by line and then builds a posFeatures variable, which contains the output of our feature selection mechanism (we’ll see how that works in a minute) with ‘pos’ or ‘neg’ appended to it, depending on whether the review it is drawing from is positive or negative.
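To make that concrete, a single entry in posFeatures for a made-up review like “a gorgeous, witty film” (using the all-words feature selector described below) would look roughly like this:

 #one element of posFeatures: [features-dict, label]
 [{'a': True, 'gorgeous': True, ',': True, 'witty': True, 'film': True}, 'pos']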
The next bit of code separates the data into training and testing data for a Naive Bayes classifier, which is the same type of classifier I used before.
 #selects 3/4 of the features to be used for training and 1/4 to be used for testing
 posCutoff = int(math.floor(len(posFeatures)*3/4))
 negCutoff = int(math.floor(len(negFeatures)*3/4))
 trainFeatures = posFeatures[:posCutoff] + negFeatures[:negCutoff]
 testFeatures = posFeatures[posCutoff:] + negFeatures[negCutoff:]
Now, thanks to NLTK, I can very simply train my classifier:
 classifier = NaiveBayesClassifier.train(trainFeatures)
Pretty cool, huh? The last thing this function needs to do is check how well the classifier does when it tries to classify the testing data. This code is a little challenging so I’ll walk through it thoroughly.
First, I have to initialize referenceSets and testSets, to be used shortly. referenceSets will contain the actual values for the testing data (which we know because the data is prelabeled) and testSets will contain the predicted output.
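The initialization itself isn't shown in the snippets here; a minimal version (an assumption on my part, using collections.defaultdict so that each label starts out mapped to an empty set) would be:

 #assumed initialization: defaultdict(set) gives an empty set for any new label
 referenceSets = collections.defaultdict(set)
 testSets = collections.defaultdict(set)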
Next, for each one of the testFeatures (the reviews that need testing), I iterate through three things: an arbitrary ‘i’, to be used as an identifier, then the features (or words) in the review, and the actual label (‘pos’ or ‘neg’).
I add the ‘i’ (the unique identifier) to the correct bin in referenceSets. I then predict the label based on the features using the trained classifier and put the unique identifier in the predicted bin in testSets.
 for i, (features, label) in enumerate(testFeatures):
     referenceSets[label].add(i)
     predicted = classifier.classify(features)
     testSets[predicted].add(i)
This gives me a big list of identifiers in referenceSets[‘pos’], which are the reviews known to be positive (and the same for the negative reviews). It also gives me a list of identifiers in testSets[‘pos’], which are the reviews predicted to be positive (and similarly for predicted negatives). What this allows me to do is to compare these lists and see how well the predictor did. Here’s where one of the StreamHacker articles (on precision and recall) really helped.
The essence of those two terms is that precision is a measure of false positives: a higher precision means fewer reviews that aren’t in the desired label get labeled as being in there. A high recall means fewer reviews that are in the desired label get put in the wrong label. As you can imagine, these metrics correlate very closely. Below is the code (again from the NLTK library) to print out the positive and negative precision and recall, as well as the accuracy (a less-specific measure just showing what percentage of reviews the classifier got right). NLTK also has a cool function that shows the features (words) that were most helpful to the classifier in determining whether a review was positive or negative:
 print 'train on %d instances, test on %d instances' % (len(trainFeatures), len(testFeatures))
 print 'accuracy:', nltk.classify.util.accuracy(classifier, testFeatures)
 print 'pos precision:', nltk.metrics.precision(referenceSets['pos'], testSets['pos'])
 print 'pos recall:', nltk.metrics.recall(referenceSets['pos'], testSets['pos'])
 print 'neg precision:', nltk.metrics.precision(referenceSets['neg'], testSets['neg'])
 print 'neg recall:', nltk.metrics.recall(referenceSets['neg'], testSets['neg'])
 classifier.show_most_informative_features(10)
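For intuition, those precision and recall calls boil down to simple set arithmetic on the identifier sets we just built; a rough, hand-rolled equivalent for the positive label (a sketch for illustration, not NLTK's actual implementation) would be:

 #sketch: precision = fraction of predicted-positive reviews that really are positive
 #        recall    = fraction of truly positive reviews that were predicted positive
 pos_precision = float(len(referenceSets['pos'] & testSets['pos'])) / len(testSets['pos'])
 pos_recall = float(len(referenceSets['pos'] & testSets['pos'])) / len(referenceSets['pos'])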
==The Basic Method==
So after all that, we can start to figure out our feature selection mechanism. This basically means the way we select which words to train the classifier on.
In my previous post, I didn’t actually train the classifier on words at all. The classifier was trained on the number of words in each category from very negative to very positive. With this Python program, Jackson and I chose to look at the individual words themselves rather than counting positive and negative words.
We did this because there is inherent error in picking positive and negative words; there’s a huge loss of information there: sentence-long reviews were reduced to just a few numbers. With this method, we’re keeping a lot more of the information in the review.
The most obvious feature selection mechanism is just to look at all the words in each review — it’s simple to code and provides a great base case. Here’s all we need:
 def make_full_dict(words):
     return dict([(word, True) for word in words])
This just builds a dictionary object (what we need for the evaluate_features method) that has each of the words in the review followed by ‘True’.
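For example (a hypothetical input), make_full_dict(['great', 'movie', '!']) would return:

 {'great': True, 'movie': True, '!': True}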
Then we can just run the testing method:
 print 'using all words as features'
 evaluate_features(make_full_dict)
Here’s the output:
 using all words as features
 train on 7998 instances, test on 2666 instances
 accuracy: 0.773068267067
 pos precision: 0.787066246057
 pos recall: 0.748687171793
 neg precision: 0.760371959943
 neg recall: 0.797449362341
 Most Informative Features
     engrossing = True    pos : neg = 17.0 : 1.0
     quiet = True         pos : neg = 15.7 : 1.0
     mediocre = True      neg : pos = 13.7 : 1.0
     absorbing = True     pos : neg = 13.0 : 1.0
     portrait = True      pos : neg = 12.4 : 1.0
     refreshing = True    pos : neg = 12.3 : 1.0
     flaws = True         pos : neg = 12.3 : 1.0
     inventive = True     pos : neg = 12.3 : 1.0
     triumph = True       pos : neg = 11.7 : 1.0
     refreshingly = True  pos : neg = 11.7 : 1.0
That might seem overwhelming, but we can go through it bit by bit!
First, we see that the accuracy is 77%, which already seems a lot better than my first attempt with R. Then we see that the precisions and recalls are all pretty close to each other, which means that it is classifying everything fairly evenly. No problems there, and we don’t need to read a whole lot more into it.
Then we see the most informative features — for example, if ‘engrossing’ is in a review, there’s a 17:1 chance the review is positive.
As a side note, there are a couple of interesting ones in there. For example, ‘flaws’ being in a review strongly indicates that the review is positive. Perhaps that’s because people rarely use “flaws” in a very negative sense, typically opting for stronger words. However, it is common to hear “it had some flaws, but…”
==The Next Step==
Moving on, the next way to select features is to only take the n most informative features — basically, the features that convey the most information. Again (for the millionth time), thanks to StreamHacker here.
We first need to find the information gain of each word. This is a big chunk of code, but we’ll break it up.
First, we broke up the words much as we did in the evaluate_features function and made them iterable (so that we could iterate through them):
 def create_word_scores():
     #splits sentences into lines
     posSentences = open('polarityData\\rt-polaritydata\\rt-polarity-pos.txt', 'r')
     negSentences = open('polarityData\\rt-polaritydata\\rt-polarity-neg.txt', 'r')
     posSentences = re.split(r'\n', posSentences.read())
     negSentences = re.split(r'\n', negSentences.read())
 
     #creates lists of all positive and negative words
     posWords = []
     negWords = []
     for i in posSentences:
         posWord = re.findall(r"[\w']+|[.,!?;]", i)
         posWords.append(posWord)
     for i in negSentences:
         negWord = re.findall(r"[\w']+|[.,!?;]", i)
         negWords.append(negWord)
     posWords = list(itertools.chain(*posWords))
     negWords = list(itertools.chain(*negWords))
Then we set up an overall frequency distribution of all the words, which can be visualized as a huge histogram with the number of each word in all the reviews combined. However, with just this line, all we do is initialize the frequency distribution — it’s actually empty:
 word_fd = FreqDist()
We’ll also need a conditional frequency distribution — a distribution that takes into account whether the word is in a positive or negative review. This can be visualized as two different histograms, one with all the words in positive reviews, and one with all the words in negative reviews. Like above, this is just an empty conditional frequency distribution. Nothing is in there yet.
 cond_word_fd = ConditionalFreqDist()
Then, we essentially fill out the frequency distributions, incrementing (with .inc) the counter of each word within the appropriate distribution.
 for word in posWords:
     word_fd.inc(word.lower())
     cond_word_fd['pos'].inc(word.lower())
 for word in negWords:
     word_fd.inc(word.lower())
     cond_word_fd['neg'].inc(word.lower())
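(One caveat if you're following along with a newer NLTK: FreqDist.inc was removed in NLTK 3, where FreqDist behaves like a Counter, so the equivalent of the loops above would be something like the following sketch.)

 #NLTK 3-style equivalent of the .inc() calls above
 for word in posWords:
     word_fd[word.lower()] += 1
     cond_word_fd['pos'][word.lower()] += 1
 for word in negWords:
     word_fd[word.lower()] += 1
     cond_word_fd['neg'][word.lower()] += 1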
The next thing we need in order to find the highest-information features is the count of words in positive reviews, the count of words in negative reviews, and the total word count:
 pos_word_count = cond_word_fd['pos'].N()
 neg_word_count = cond_word_fd['neg'].N()
 total_word_count = pos_word_count + neg_word_count
The last thing we need to do is use a chi-squared test (also from NLTK) to score the words. We find each word’s positive information score and negative information score, add them up, and fill up a dictionary correlating the words and scores, which we then return out of the function. The chi-squared test, as you can read in the Wikipedia article on it, is a great way to see how much information a given input conveys.
 word_scores = {}
 for word, freq in word_fd.iteritems():
     pos_score = BigramAssocMeasures.chi_sq(cond_word_fd['pos'][word], (freq, pos_word_count), total_word_count)
     neg_score = BigramAssocMeasures.chi_sq(cond_word_fd['neg'][word], (freq, neg_word_count), total_word_count)
     word_scores[word] = pos_score + neg_score
 
 return word_scores
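If the chi_sq call looks opaque, here is the positive-score line again with the arguments spelled out in comments (the annotation is mine, just restating what's being passed in):

 #chi_sq(count of this word in positive reviews,
 #       (count of this word overall, count of all words in positive reviews),
 #       count of all words overall)
 pos_score = BigramAssocMeasures.chi_sq(cond_word_fd['pos'][word], (freq, pos_word_count), total_word_count)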
We then make another function that finds the best n words, given a set of scores (which we’ll calculate using the function we just made) and an n:
 def find_best_words(word_scores, number):
     best_vals = sorted(word_scores.iteritems(), key=lambda (w, s): s, reverse=True)[:number]
     best_words = set([w for w, s in best_vals])
     return best_words
Are you a little confused by that ‘lambda’? I certainly was when I saw it. Essentially what it does is allow you to temporarily make a function to return something. In this case, it’s helping to sort the words into the correct order. You can check this out for more information on lambda functions.
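As an aside, that lambda (w, s): s relies on tuple unpacking in the argument list, which only works in Python 2. If you're on Python 3, one way (of several) to write the same sort is:

 #Python 3-friendly version of the sort; itemgetter(1) pulls out the score
 from operator import itemgetter
 best_vals = sorted(word_scores.items(), key=itemgetter(1), reverse=True)[:number]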
Finally, we can make a feature selection mechanism that returns ‘True’ for a word only if it is in the best words list:
 def best_word_features(words):
     return dict([(word, True) for word in words if word in best_words])
Last, I ran it using the best 10, 100, 1000, 10000, and 15000 words. Here’s the code to do that:
 numbers_to_test = [10, 100, 1000, 10000, 15000]
 #tries the best_word_features mechanism with each of the numbers_to_test of features
 for num in numbers_to_test:
     print 'evaluating best %d word features' % (num)
     best_words = find_best_words(word_scores, num)
     evaluate_features(best_word_features)
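Note that this loop assumes word_scores already exists at this point; that line isn't shown in the excerpts above, but it would presumably just be a call to the scoring function we wrote earlier:

 #assumed: compute the chi-squared word scores once, before the loop
 word_scores = create_word_scores()

It also relies on best_words being a module-level name, which is how best_word_features can see it without taking it as an argument.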
Here’s my output (I’ve cut out the informative features list because it’s the same for all of them, including the one using all the features):
 evaluating best 10 word features
 train on 7998 instances, test on 2666 instances
 accuracy: 0.574643660915
 pos precision: 0.549379652605
 pos recall: 0.830457614404
 neg precision: 0.652841781874
 neg recall: 0.318829707427
 
 evaluating best 100 word features
 train on 7998 instances, test on 2666 instances
 accuracy: 0.682295573893
 pos precision: 0.659868421053
 pos recall: 0.752438109527
 neg precision: 0.712041884817
 neg recall: 0.61215303826
 
 evaluating best 1000 word features
 train on 7998 instances, test on 2666 instances
 accuracy: 0.79632408102
 pos precision: 0.817014446228
 pos recall: 0.763690922731
 neg precision: 0.778169014085
 neg recall: 0.82895723931
 
 evaluating best 10000 word features
 train on 7998 instances, test on 2666 instances
 accuracy: 0.846586646662
 pos precision: 0.868421052632
 pos recall: 0.81695423856
 neg precision: 0.827195467422
 neg recall: 0.876219054764
 
 evaluating best 15000 word features
 train on 7998 instances, test on 2666 instances
 accuracy: 0.846961740435
 pos precision: 0.862745098039
 pos recall: 0.825206301575
 neg precision: 0.832494608196
 neg recall: 0.868717179295
==The Conclusions==
There’s lots to read into here. Obviously, using very few features didn’t do great in terms of accuracy, precision, and recall because there wasn’t enough data to build the model off of. Using 1000 features is about as good as using all the features, but at 10000 and 15000, there’s a pretty huge increase over the base case, getting up to ~85% accuracy and similar precision and recall statistics.
That means that using intelligent feature selection increased the accuracy by around 8 percentage points, which seems like quite a significant jump. Jackson and I were very happy about that.
We’re also happy about the results as a whole — classifying reviews with over 80% accuracy is pretty impressive and we can see lots of applications for this technology.
Of course, there are tons of ways to improve these results. These include (but are not limited to):
* Adding different feature selection mechanisms
* Pre-processing the text to get rid of unimportant words or punctuation
* Doing deeper analysis of the sentences as a whole
* Trying a different classifier than the Naive Bayes Classifier
A disclaimer applies: we’re just learning all of this, and fairly independently too. There’s a decent chance that there’s a mistake or an inappropriate conclusion somewhere. If there is, please don’t hesitate to email me. I’d also love to hear from you if you have any other input on the project!
For the full code, check out the GitHub repository: https://github.com/abromberg/sentiment_analysis_python
==Addendum==
Pieter Sheth-Voss pointed out in the comments section that there’s a linear relationship between the accuracy and the log of the number of features. I decided to take a look at this in R. Here’s the code I used.
 library(ggplot2)
 features <- c(10, 50, 100, 250, 500, 750, 1000, 2500, 5000, 7500, 10000, 12500, 15000)
 logFeatures <- log(features)
 accuracy <- c(0.5746436609152288, 0.6406601650412603, 0.6822955738934734, 0.7291822955738935, 0.7659414853713429, 0.7861965491372843, 0.7963240810202551, 0.8304576144036009, 0.8510877719429858, 0.8465866466616654, 0.8465866466616654, 0.8465866466616654, 0.8469617404351087)
 data <- data.frame(logFeatures, accuracy)
 ggplot(data, aes(x=logFeatures, y=accuracy)) +
     geom_point(shape=1) +
     geom_smooth(method=lm)
This takes advantage of the great ggplot2 library for R. It simply puts in the features, calculates the log of them, puts in the accuracy, and plots these two variables. Here’s the output:
[features_performance_all: plot of accuracy vs. logFeatures for all tested feature counts, with a linear fit]
I noticed here that the last 4 values seem a bit out of line and ran the script again with those removed to see how well the relationship performed through 5000 features:
[features_performance_partial: plot of accuracy vs. logFeatures through 5000 features, with a linear fit]
This is quite a strong correlation. Very interesting how well that worked out. Thanks for the heads up, Pieter!