Data Analysis of Tuscaloosa Tweets

I wanted to play around with Sentiment Analysis of Tweets; specifically, I wanted to try the Python TextBlob library, which has a built-in function that performs text analysis to determine if a string has a positive or negative sentiment. After pondering a bit, I decided it would be fun to search for tweets that were created specifically within the city limits of Tuscaloosa, where I am currently attending school. I wrote a script that scrapes Twitter and returns tweets by geolocation, and then uses TextBlob on the results.

# -*- coding: utf-8 -*-
"""
Created on Wed Jul  6 15:58:58 2022

@author: austin
"""

import snscrape.modules.twitter as sntwitter #Social Network Scraping Library
import pandas as pd #so I can make a dataframe of results
from textblob import TextBlob

#Tuscaloosa = geocode:33.23726448661455,-87.58279011262114,20km
query = "geocode:33.23726448661455,-87.58279011262114,20km"
tweets = []
limit = 10000000 #set a limit on how many results I want to pull

for tweet in sntwitter.TwitterSearchScraper(query).get_items():
    if len(tweets) == limit:
        break
    # Determine sentiment with TextBlob
    text = tweet.content
    analysis = TextBlob(text)
    if analysis.sentiment.polarity >= 0:
        sentiment = 'positive'
    else:
        sentiment = 'negative'
    tweets.append([tweet.date, tweet.user.username, tweet.content, sentiment])

df = pd.DataFrame(tweets, columns=['Date', 'User','Tweet', 'Sentiment'])
df.to_csv('twitter_scrape_results.csv') #save dataframe as csv

print("\014") #clear console
print("CSV Successfully Created")

The results were pretty interesting (I uploaded the dataset to Kaggle if anyone is interested). It seems sentiment stays roughly the same each year, hovering around 85% positive and 15% negative. I really would have thought negative sentiment would be much higher based on my personal observations of Twitter content: it makes me wonder if Tuscaloosa is an unusually happy place, or if my Twitter observations are influenced by negativity bias…
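If you want to reproduce the yearly sentiment split from the exported CSV, a quick pandas crosstab over the Date and Sentiment columns (the column names used in the script above) does the trick. The mini-DataFrame here is stand-in data, not the real scrape:

```python
import pandas as pd

# Stand-in rows mimicking twitter_scrape_results.csv (Date, Sentiment columns)
df = pd.DataFrame({
    "Date": pd.to_datetime(["2020-01-05", "2020-06-10", "2021-03-02", "2021-08-20"]),
    "Sentiment": ["positive", "negative", "positive", "positive"],
})

# Percentage of positive vs. negative tweets per year
yearly = pd.crosstab(df["Date"].dt.year, df["Sentiment"], normalize="index").mul(100)
print(yearly)
```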

In any case, perhaps a more interesting bit of data is that the total number of tweets seems to decline quite a bit each year. This raises the question: why are Tuscaloosans tweeting less often? I put the results into this Tableau dashboard, which displays just how steady and steep the decline has been.
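The decline itself is easy to verify from the CSV: group the Date column by year and count rows. A sketch with made-up dates standing in for the real data:

```python
import pandas as pd

# Made-up dates; the real values come from the Date column of the scrape CSV
dates = pd.to_datetime([
    "2019-01-01", "2019-05-01", "2019-09-01",  # three tweets in 2019
    "2020-02-01", "2020-07-01",                # two in 2020
    "2021-04-01",                              # one in 2021
])

# Tweet volume per year, oldest year first
per_year = pd.Series(dates).dt.year.value_counts().sort_index()
print(per_year)
```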


I decided to test a hypothesis: perhaps the high level of positive tweet sentiment is due to the fact that this is a college town, and numerous tweets were posted by official University of Alabama departments? I used OpenRefine to filter out official UA accounts, which was easy enough to do since their usernames seem to either begin with “UA_” or end with “_UA”. Surprisingly, that didn’t change the sentiment percentages at all. I now suspect that even after filtering out all official UA Twitter accounts, you would also have to account for the fact that a large number of Tuscaloosans work for UA (45,000 employees). I know many of my professors post UA-related content from their personal Twitter accounts, and that content naturally slants positive.
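For anyone who would rather stay in Python than switch to OpenRefine, the same filter can be sketched in pandas. The usernames below are made up for illustration; only the "UA_"/"_UA" pattern comes from the analysis above:

```python
import pandas as pd

# Made-up usernames for illustration
df = pd.DataFrame({
    "User": ["UA_Athletics", "crimson_fan", "Housing_UA", "tide_roller"],
    "Sentiment": ["positive", "positive", "positive", "negative"],
})

# Flag official-looking UA accounts: names beginning with "UA_" or ending with "_UA"
official = df["User"].str.startswith("UA_") | df["User"].str.endswith("_UA")
personal_only = df[~official]
print(personal_only)
```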

Data Analysis of the MechanicalKeyboards Subreddit

Developers tend to take their keyboards seriously. I have been using classic buckling spring IBM Model M computer keyboards since I first began programming. These are great to type on, and I still love them (kind of feels like typing on a typewriter), but I decided recently that I should upgrade to a compact keyboard that uses modern mechanical switches. This would give me more space on my desk and allow for some customization. There seems to be an endless sea of options to choose from, though, so what is a developer to do? The first step in my consumer journey was to narrow my options down to a few top brands, and I thought a good way to cut through the clutter would be to scrape the r/MechanicalKeyboards subreddit to see what brands are the most talked about currently. So I wrote this Python script that uses Reddit’s API to scrape the subreddit.

import praw
from praw.models import MoreComments
import datetime
import pandas as pd

# Let's use PRAW (a Python wrapper for the Reddit API)
reddit = praw.Reddit(client_id='', client_secret='', user_agent='') # fill in your own API credentials

# Scraping the posts
posts = reddit.subreddit('MechanicalKeyboards').hot(limit=None) # Sorted by hottest
posts_dict = {"Title": [], "Post Text": [], "Date": [],
              "Score": [], "ID": [],
              "Total Comments": [], "Post URL": []}

comments_dict = {"Title": [], "Comment": [], "Date": [],
                 "Score": [], "ID": [], "Post URL": []}

for post in posts:
    # Title of each post
    posts_dict["Title"].append(post.title)
    # Text inside a post
    posts_dict["Post Text"].append(post.selftext)
    # Date of each post
    dt = datetime.datetime.fromtimestamp(post.created_utc) # Convert UTC to DateTime
    posts_dict["Date"].append(dt)
    # The score of a post
    posts_dict["Score"].append(post.score)
    # Unique ID of each post
    posts_dict["ID"].append(post.id)
    # Total number of comments inside the post
    posts_dict["Total Comments"].append(post.num_comments)
    # URL of each post
    posts_dict["Post URL"].append(post.url)
    # Now we need to scrape the comments on the posts
    submission = reddit.submission(id=post.id)
    submission.comments.replace_more(limit=0) # Use replace_more to remove all MoreComments
    # Use .list() method to also get the comments of the comments
    for comment in submission.comments.list():
        # Title of each post
        comments_dict["Title"].append(post.title)
        # The comment
        comments_dict["Comment"].append(comment.body)
        # Date of each comment
        dt = datetime.datetime.fromtimestamp(comment.created_utc) # Convert UTC to DateTime
        comments_dict["Date"].append(dt)
        # The score of a comment
        comments_dict["Score"].append(comment.score)
        # Unique ID of each comment
        comments_dict["ID"].append(comment.id)
        # URL of each post
        comments_dict["Post URL"].append(post.url)

# Saving the data in pandas dataframes
allPosts = pd.DataFrame(posts_dict)

allComments = pd.DataFrame(comments_dict)

# Time to output everything to csv files
allPosts.to_csv("MechanicalKeyboards_Posts.csv", index=True)
allComments.to_csv("MechanicalKeyboards_Comments.csv", index=True)

Reddit limits API requests to 1,000 posts, so the most current 1,000 posts are my sample size. My code outputs two files: the last 1,000 posts, and more importantly the comments on those posts, which ended up being 9,042 rows of data. (I posted the files to Kaggle if anyone would like to play with them.) Then I imported my comments dataset into OpenRefine so I could run text filters to find brand names, and I recorded the number of mentions for each brand. Finally, using Tableau, I created a couple of data visualization charts to express my findings. Here are the most talked-about keyboard brands on r/MechanicalKeyboards currently:
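The brand counting I did with OpenRefine text filters can also be sketched in pandas using case-insensitive substring matches. The brand list and comments below are illustrative, not the real dataset, and this counts comments containing a brand name rather than total mentions:

```python
import pandas as pd

# Illustrative comments standing in for MechanicalKeyboards_Comments.csv
comments = pd.Series([
    "My Keychron K6 arrived today",
    "The Ducky One 2 Mini is great",
    "keychron vs ducky, thoughts?",
    "Just lubed my Glorious Pandas",
])

brands = ["Keychron", "Ducky", "Glorious", "Logitech"]  # example list, not exhaustive

# Number of comments mentioning each brand, ignoring case
mentions = {brand: int(comments.str.contains(brand, case=False).sum()) for brand in brands}
print(mentions)
```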


I decided to go with the Keychron keyboard that my research found to be the most discussed (and I also added Glorious Panda Switches and HK Gaming PBT Keycaps). Couldn’t be happier; it’s a pleasure to type on.