I have been trying to teach myself game development using Unity for a few weeks now. I put together a project I'm calling "Robot Slayer": an FPS that I created to be a test environment for my script writing. Then Unity decided to pull the rug out from under all developers this week with their predatory pricing changes, and I expect a lot of studios will abandon the platform moving forward. I think I'm going to switch to learning Unreal and C++, and keep an eye on how Godot develops now that Unity is imploding. So instead of developing my Unity game further, I'm putting what I've made so far on my site here: RobotSlayer. It's just one small level and a bit janky, but it's playable and you can run it in your browser. I've also put the code for it up on GitHub.
I have begun teaching myself C# game development in Unity. After watching several tutorials and reading "Unity in Action" by Joseph Hocking, I started by creating an arena where I could test attaching scripts to game objects. I experimented a bit with textures and skyboxes. A script for player movement was easy enough to get working. Then I wrote a basic script (based on an example in the book) that moves objects through the environment while avoiding obstacles via raycasting. That looked like this:
I'm teaching myself #gamedevelopment in Unity and this is my current experiment. I made a basic environment, so I could test a simple AI script that makes the boxes avoid the player and navigate away from walls. It's janky, but it's a start. #indiedev pic.twitter.com/m2PRvGvpPu
Next, I created a proper enemy with a bit of basic AI. Building on the obstacle-avoidance code, I added a function that makes the enemy stalk the player through the environment and attack when the player is spotted. Then I created a simple weapons system and armed both the enemy and the player. (This also gave me a chance to play with Bloom post-processing to make the projectiles "glowy".) This is where I'm at now:
Some progress on the #Unity game I'm making to teach myself #gamedev: I made an enemy with some limited AI that can track the player and avoid obstacles. I also created a weapons system. pic.twitter.com/3aZdY4gyvf
I wanted to play around with sentiment analysis of tweets; specifically, I wanted to try the Python TextBlob library, which has a built-in function that analyzes text to determine whether a string has a positive or negative sentiment. After pondering a bit, I decided it would be fun to search for tweets created within the city limits of Tuscaloosa, where I am currently attending school. So I wrote a script that scrapes Twitter for tweets by geolocation, then runs TextBlob on the results.
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 6 15:58:58 2022
@author: austin
"""
import snscrape.modules.twitter as sntwitter  # Social Network Scraping Library
import pandas as pd  # so I can make a dataframe of results
from textblob import TextBlob
import time

# Tuscaloosa = geocode:33.23726448661455,-87.58279011262114,20km
query = "geocode:33.23726448661455,-87.58279011262114,20km"
tweets = []
limit = 10000000  # set a limit on how many results I want to pull

for tweet in sntwitter.TwitterSearchScraper(query).get_items():
    if len(tweets) == limit:
        break
    # set sentiment
    text = tweet.content
    analysis = TextBlob(text)
    if analysis.sentiment.polarity >= 0:
        sentiment = 'positive'
    else:
        sentiment = 'negative'
    tweets.append([tweet.date, tweet.user.username, tweet.content, sentiment])

df = pd.DataFrame(tweets, columns=['Date', 'User', 'Tweet', 'Sentiment'])
df.to_csv('twitter_scrape_results.csv')  # save dataframe as csv
print("\014")  # clear console
time.sleep(10)
print("CSV Successfully Created")
The results were pretty interesting (I uploaded the dataset to Kaggle if anyone is interested). Sentiment stays roughly the same each year, hovering around 85% positive and 15% negative. I really would have thought negative sentiment would be much higher based on my personal observations of Twitter content; it makes me wonder whether Tuscaloosa is an unusually happy place, or whether my impressions of Twitter are skewed by negativity bias…
In any case, perhaps a more interesting finding is that the total number of tweets declines quite a bit each year. This raises the question: why are Tuscaloosans tweeting less often? I put the results into this Tableau dashboard, which shows just how steady and steep the decline has been.
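The yearly breakdown can be computed from the CSV with pandas. Here's a minimal sketch of that aggregation, using a small hypothetical DataFrame in place of the real twitter_scrape_results.csv (the column names match the script above):

```python
import pandas as pd

# Hypothetical sample standing in for twitter_scrape_results.csv
df = pd.DataFrame({
    "Date": pd.to_datetime(["2020-03-01", "2020-06-10", "2021-01-05", "2021-08-20"]),
    "Sentiment": ["positive", "negative", "positive", "positive"],
})

# Share of positive/negative tweets per year
by_year = (df.groupby(df["Date"].dt.year)["Sentiment"]
             .value_counts(normalize=True)
             .unstack(fill_value=0))
print(by_year)

# Total tweet volume per year (the "are people tweeting less?" question)
volume = df.groupby(df["Date"].dt.year).size()
print(volume)
```

Dropping `normalize=True` gives raw counts instead of percentages, which is what feeds the volume chart.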
Update:
I decided to test a hypothesis: perhaps the high level of positive tweet sentiment is due to this being a college town, with numerous tweets posted by official University of Alabama departments. I used OpenRefine to filter out official UA accounts, which was easy enough since their usernames seem to either begin with "UA_" or end with "_UA". Surprisingly, though, that didn't change the sentiment percentages at all. I now suspect that even after removing all official UA Twitter accounts, you would still have to account for the fact that a large number of Tuscaloosans work for UA (45,000 employees). Many of my professors post UA-related content from their personal Twitter accounts, and that content naturally slants positive.
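The same filter I applied in OpenRefine can be expressed in pandas. This is just a sketch with hypothetical usernames; the rule (drop accounts beginning with "UA_" or ending with "_UA") is the one described above:

```python
import pandas as pd

# Hypothetical rows; real data comes from twitter_scrape_results.csv
df = pd.DataFrame({
    "User": ["UA_Library", "Athletics_UA", "rolltide_fan", "jane_doe"],
    "Sentiment": ["positive", "positive", "negative", "positive"],
})

# Same rule used in OpenRefine: official accounts begin with "UA_" or end with "_UA"
official = df["User"].str.startswith("UA_") | df["User"].str.endswith("_UA")
personal = df[~official]

# Recompute the sentiment split on personal accounts only
print(personal["Sentiment"].value_counts(normalize=True))
```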
Developers tend to take their keyboards seriously. I have been using classic buckling-spring IBM Model M keyboards since I first began programming. They are great to type on, and I still love them (it kind of feels like typing on a typewriter), but I recently decided I should upgrade to a compact keyboard with modern mechanical switches. That would give me more space on my desk and allow for some customization. There seems to be an endless sea of options to choose from, though, so the first step in my consumer journey was to narrow the field to a few top brands. What's a developer to do? I thought a good way to cut through the clutter would be to scrape the r/MechanicalKeyboards subreddit to see which brands are the most talked about right now. So I wrote this Python script that uses Reddit's API to scrape the subreddit.
import praw
import datetime
import pandas as pd

# Let's use PRAW (a Python wrapper for the Reddit API)
reddit = praw.Reddit(client_id='', client_secret='', user_agent='')

# Scraping the posts
posts = reddit.subreddit('MechanicalKeyboards').hot(limit=None)  # Sorted by hottest

posts_dict = {"Title": [], "Post Text": [], "Date": [],
              "Score": [], "ID": [],
              "Total Comments": [], "Post URL": []}
comments_dict = {"Title": [], "Comment": [], "Date": [],
                 "Score": [], "ID": [], "Post URL": []}

for post in posts:
    posts_dict["Title"].append(post.title)         # Title of each post
    posts_dict["Post Text"].append(post.selftext)  # Text inside the post
    dt = datetime.date.fromtimestamp(post.created_utc)  # Convert UTC timestamp to a date
    posts_dict["Date"].append(dt)
    posts_dict["Score"].append(post.score)         # The score of the post
    posts_dict["ID"].append(post.id)               # Unique ID of the post
    posts_dict["Total Comments"].append(post.num_comments)
    posts_dict["Post URL"].append(post.url)

    # Now we need to scrape the comments on the post
    submission = reddit.submission(id=post.id)
    submission.comments.replace_more(limit=0)  # Use replace_more to remove all MoreComments
    # Use .list() to also get the replies to comments
    for comment in submission.comments.list():
        comments_dict["Title"].append(post.title)      # Title of the parent post
        comments_dict["Comment"].append(comment.body)  # The comment itself
        dt = datetime.date.fromtimestamp(comment.created_utc)
        comments_dict["Date"].append(dt)               # Date of the comment
        comments_dict["Score"].append(comment.score)   # The score of the comment
        comments_dict["ID"].append(post.id)            # ID of the parent post
        comments_dict["Post URL"].append(post.url)     # URL of the parent post

# Saving the data in pandas dataframes
allPosts = pd.DataFrame(posts_dict)
allComments = pd.DataFrame(comments_dict)

# Time to output everything to csv files
allPosts.to_csv("MechanicalKeyboards_Posts.csv", index=True)
allComments.to_csv("MechanicalKeyboards_Comments.csv", index=True)
Reddit limits API requests to 1000 posts, so the most recent 1000 posts are my sample size. My code outputs two files: the last 1000 posts and, more importantly, the comments on those posts, which came to 9042 rows of data. (I posted the files to Kaggle if anyone would like to play with them.) I then imported the comments dataset into OpenRefine so I could run text filters for brand names, and I recorded the number of mentions of each brand. Finally, using Tableau, I created a couple of charts to visualize my findings. Here are the most talked-about keyboard brands on r/MechanicalKeyboards right now:
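The brand-counting step can also be done directly in pandas instead of OpenRefine. Here's a sketch with a hypothetical brand list and a few made-up comments standing in for MechanicalKeyboards_Comments.csv; each comment that mentions a brand (case-insensitively) counts once:

```python
import pandas as pd

# Hypothetical brand list; the real list came from reading the subreddit
brands = ["Keychron", "Ducky", "Leopold", "GMMK"]

# Hypothetical comments standing in for the Comment column of the CSV
comments = pd.Series([
    "My Keychron K6 arrived today!",
    "Ducky or Leopold for a first board?",
    "keychron firmware is hit or miss",
])

# Case-insensitive substring match; one count per comment that mentions the brand
mentions = {b: int(comments.str.contains(b, case=False).sum()) for b in brands}
print(mentions)
```

Note that plain substring matching over-counts brands whose names are common words, so spot-checking the matches (as a text filter in OpenRefine lets you do) is still worthwhile.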
Update:
I decided to go with the Keychron keyboard that my research found to be the most discussed (and I also added Glorious Panda switches and HK Gaming PBT keycaps). I couldn't be happier; it's a pleasure to type on.
For a class project, I decided to reimagine Katsushika Hokusai's seminal "Red Fuji" as ASCII art. Essentially, I wrote a Python script that converts the image into text art using only the letters in Hokusai's name. Since different letters have different visual weights, you can use them for gradations of shading. The script converts the image to black and white, assigns a numerical value to each pixel based on how dark it is, and then replaces each pixel with the letter assigned to that shade range. I like how it turned out.
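The core shading step can be sketched in pure Python. The letter ordering below is a hypothetical one (my assumption of heaviest to lightest; in practice you tune it by eye), and the grayscale values are the usual 0 (black) to 255 (white):

```python
# Letters from "Hokusai", ordered (by assumption) from visually heaviest to lightest
PALETTE = "HKUOASI"  # hypothetical ordering; tune by eye

def pixel_to_letter(value, palette=PALETTE):
    """Map a grayscale value (0 = black, 255 = white) to a letter.

    The 0-255 range is split into len(palette) equal bands, and darker
    pixels get visually heavier letters.
    """
    band = min(value * len(palette) // 256, len(palette) - 1)
    return palette[band]

# A tiny hypothetical "image": one row fading from black to white
row = [0, 60, 120, 180, 250]
print("".join(pixel_to_letter(v) for v in row))
```

In the real script, a library like Pillow supplies the grayscale pixel values, and the same mapping runs over every row of the image.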
This was a fun electronics project I did a few years ago: I hacked an old Nintendo NES R.O.B. robot to be controllable from a PC via a webpage. (I ended up selling the robot to someone who wanted to use it in a commercial.)
I soldered a Teensy microcontroller to the robot's original circuit board and wrote a bit of code to interpret serial input. Then I made a PHP file that sends serial data to the Teensy's port and built a GUI with a bit of Bootstrap. Here is the code if you'd like to try this yourself: R.O.B. Robot Controller
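To give a feel for the control flow, here is a sketch of the PC side in Python rather than PHP. The command names and single-character protocol are purely hypothetical (the real protocol is whatever the Teensy sketch in the repo expects), and the serial-port part requires pyserial and a connected board, so it is shown commented out:

```python
# Hypothetical command framing; the real Teensy sketch defines its own protocol
COMMANDS = {"left": "L", "right": "R", "up": "U", "down": "D",
            "open": "O", "close": "C"}

def frame(command):
    """Translate a command name into a newline-terminated serial message."""
    return (COMMANDS[command] + "\n").encode("ascii")

# Sending it to the Teensy would look like this (requires pyserial and hardware):
# import serial
# with serial.Serial("/dev/ttyACM0", 9600) as port:
#     port.write(frame("left"))

print(frame("left"))
```

The webpage's buttons just trigger these one-character messages, and the Teensy firmware translates each one into the pulse sequence the R.O.B. hardware understands.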