Hackers Realm
Twitter Sentiment Analysis using Python (NLP) | Machine Learning Project Tutorial
Updated: Apr 15
Twitter Sentiment Analysis is a classification project that comes under Natural Language Processing. The objective of the project is to analyze tweets and classify whether each tweet carries a positive or negative sentiment.

In this project tutorial, we are going to analyze and classify tweets from the dataset using a classification model and visualize the most frequent words with plots.
Dataset Information
The objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets from other tweets.
Formally, given a training sample of tweets and labels, where label '1' denotes the tweet is racist/sexist and label '0' denotes the tweet is not racist/sexist, your objective is to predict the labels on the test dataset.
For training the models, we provide a labelled dataset of 31,962 tweets. The dataset is provided in the form of a csv file with each line storing a tweet id, its label and the tweet.
In this analysis we are going to process text data. Since machine learning models can't work with raw text directly, we'll convert the text into numeric vectors and proceed from there.
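As a minimal sketch of what that vectorization step can look like (using scikit-learn's CountVectorizer, one common choice; the sample sentences below are illustrative, not from the dataset, and get_feature_names_out() assumes scikit-learn 1.0+):
from sklearn.feature_extraction.text import CountVectorizer
# two illustrative sentences, not from the dataset
docs = ["i love this movie", "i hate this movie"]
vectorizer = CountVectorizer()
vectors = vectorizer.fit_transform(docs)
print(vectorizer.get_feature_names_out())  # vocabulary learned from the text
print(vectors.toarray())  # one word-count vector per sentence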
Download the dataset here
Import modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import re
import string
import nltk
import warnings
%matplotlib inline
warnings.filterwarnings('ignore')
pandas - used to perform data manipulation and analysis
numpy - used to perform a wide variety of mathematical operations on arrays
matplotlib - used for data visualization and graphical plotting
seaborn - built on top of matplotlib with similar functionalities
re - used to find and process particular patterns in text via regular expressions
string - used to inspect and manipulate string data
nltk - a natural language processing toolkit, bundled with the Anaconda distribution
warnings - to manipulate warning details
%matplotlib inline - to enable inline plotting in the notebook
filterwarnings('ignore') ignores the warnings thrown by the modules (gives clean output)
Loading the dataset
df = pd.read_csv('Twitter Sentiments.csv')
df.head()

pd.read_csv() loads the csv (comma separated values) data into a dataframe
df.head() displays the first 5 rows of the dataframe
Zero (0) indicates it’s a positive sentiment.
One (1) indicates it’s a negative sentiment (racist/sexist).
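As a quick sanity check before preprocessing, we can look at the class balance (df and the 'label' column are the dataframe and column shown above):
# count how many tweets fall under each label
print(df['label'].value_counts())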
# datatype info
df.info()

The 'tweet' column has dtype object, meaning pandas stores the tweets as strings; this is the column we will clean in the pre-processing step.
Preprocessing the dataset
# removes pattern in the input text
def remove_pattern(input_txt, pattern):
    r = re.findall(pattern, input_txt)
    for word in r:
        input_txt = re.sub(word, "", input_txt)
    return input_txt
This helper finds every occurrence of a given pattern in the text and removes it during preprocessing.
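For example, on a made-up tweet:
# strip the handle from a sample string
print(remove_pattern("@user thanks for the follow!", r"@[\w]*"))
# output: " thanks for the follow!"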
df.head()

# remove twitter handles (@user)
df['clean_tweet'] = np.vectorize(remove_pattern)(df['tweet'], r"@[\w]*")
df.head()

"@[\w]*" is the twitter handle pattern to remove in the text for preprocessing
# remove special characters, numbers and punctuations
df['clean_tweet'] = df['clean_tweet'].str.replace(r"[^a-zA-Z#]", " ", regex=True)
df.head()

[^a-zA-Z#] matches every character that is not a letter or '#', so all special characters, numbers and punctuation are replaced with a space (regex=True is needed in newer pandas versions, where str.replace defaults to literal matching)
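To see the effect on a made-up string:
# every character except letters and '#' becomes a space
print(re.sub(r"[^a-zA-Z#]", " ", "Love it!!! 100% #amazing"))
# punctuation and digits turn into spaces, leaving only the letters and "#amazing"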
# remove short words
df['clean_tweet'] = df['clean_tweet'].apply(lambda x: " ".join([w for w in x.split() if len(w)>3]))
df.head()

Words of 3 characters or fewer are removed; len(w) > 3 keeps only words longer than 3 characters, since very short words rarely carry sentiment.
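On a made-up sentence:
# words of 3 characters or fewer are dropped
print(" ".join([w for w in "this is so bad".split() if len(w) > 3]))
# output: "this"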
# individual words considered as tokens
tokenized_tweet = df['clean_tweet'].apply(lambda x: x.split())
tokenized_tweet.head()

Each tweet is split into individual word tokens, which makes the per-word processing in the next step easier.
# stem the words
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
tokenized_tweet = tokenized_tweet.apply(lambda sentence: [stemmer.stem(word) for word in sentence])
tokenized_tweet.head()

stemmer.stem() reduces each word to its root form, collapsing variants of the same word into a single token.
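For example:
# a few illustrative words passed through the Porter stemmer
print([stemmer.stem(w) for w in ["running", "flies", "happily"]])
# output: ['run', 'fli', 'happili']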
# combine words into single sentence
for i in range(len(tokenized_tweet)):
    tokenized_tweet[i] = " ".join(tokenized_tweet[i])
df['clean_tweet'] = tokenized_tweet
df.head()

The stemmed tokens are joined back into one string per tweet and written back to the dataframe.
Exploratory Data Analysis
In Exploratory Data Analysis (EDA), we visualize the data with different kinds of plots for inference. It helps reveal patterns and relations within the data.
!pip install wordcloud
Collecting wordcloud
Successfully installed wordcloud-1.8.1
This installs the wordcloud package needed for the frequency visualizations below.
# visualize the frequent words
all_words = " ".join([sentence for sentence in df['clean_tweet']])
from wordcloud import WordCloud
wordcloud = WordCloud(width=800, height=500, random_state=42, max_font_size=100).generate(all_words)
# plot the graph
plt.figure(figsize=(15,8))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()

All the cleaned tweets are joined into a single string and passed to WordCloud, which sizes each word by how frequently it appears.
The plot shows many positive words and a few negative ones.
# frequent words visualization for +ve
all_words = " ".join([sentence for sentence in df['clean_tweet'][df['label']==0]])
wordcloud = WordCloud(width=800, height=500, random_state=42, max_font_size=100).generate(all_words)
# plot the graph
plt.figure(figsize=(15,8))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()

The same visualization filtered to positive sentiments by indexing with df['label']==0.
Compared with the previous plot, positive words are even more dominant.
# frequent words visualization for -ve
all_words = " ".join([sentence for sentence in df['clean_tweet'][df['label']==1]])
wordcloud = WordCloud(width=800, height=500, random_state=42, max_font_size=100).generate(all_words)
# plot the graph
plt.figure(figsize=(15,8))
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()

For the negative sentiment the code is exactly the same, except the label value is changed to one (1), filtering the tweets flagged as racist/sexist.
# extract the hashtag
def hashtag_extract(tweets):
    hashtags = []
    # loop over the tweets
    for tweet in tweets:
        ht = re.findall(r"#(\w+)", tweet)
        hashtags.append(ht)
    return hashtags
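A quick illustrative call (the sample tweets are made up):
sample = ["loving the #sunshine today", "no tags here"]
print(hashtag_extract(sample))
# output: [['sunshine'], []]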