A Poetic Twitter Bot

What Is It?

I have always been interested in the fusion of art and technology. My goal was to find a project that would allow me to dabble with a combination of these two elements.

haikubot.exe is a Twitter bot that procedurally generates and tweets out completely original haikus. Each poem is grammatically correct and follows the traditional 5-7-5 syllable haiku structure. 

Written entirely in Python 3, haikubot.exe uses Markov chains and the Twitter API to generate and tweet original poems autonomously.

haikubot.exe has a sizable vocabulary of exactly 2,000 words: the 2,000 most common words used in contemporary poetry.

Those 2,000 words were sorted by part of speech and syllable count, then saved into a collection of text files. The bot's algorithm determines which part of speech and syllable count is needed at each word selection, and pulls a random word meeting those requirements from the corresponding .txt file.
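The word-selection step described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the bot's actual source: the in-memory dictionary stands in for the collection of .txt files, and the Markov-chain sequencing that decides *which* part of speech comes next is not shown.

```python
import random

# Hypothetical stand-in for the sorted .txt files: each (part of speech,
# syllable count) pair maps to the words that satisfy both requirements.
vocab = {
    ("noun", 2): ["river", "shadow", "winter"],
    ("verb", 1): ["falls", "drifts", "sleeps"],
}

def pick_word(part_of_speech, syllables, vocab):
    """Pull a random word meeting the POS and syllable requirements."""
    candidates = vocab[(part_of_speech, syllables)]
    return random.choice(candidates)

word = pick_word("noun", 2, vocab)
```

In the real bot, `vocab` would presumably be populated by reading each text file once at startup, so every per-word selection is a single dictionary lookup plus a random choice.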


The live version of this bot can be found at @haiku_exe.

As long as the program is running, haikubot.exe will tweet a new haiku every 10 seconds, until of course it gets temporarily post-blocked for spamming. :-)

A* Path Planner: CAP 4053 Artificial Intelligence for Computer Games Project 2020

What Is It?

Path Planner uses the A* search algorithm to find an optimal path from a given "start" tile to any specified "goal" tile. The project utilizes hexagonal tile maps as opposed to generic square tile maps, and is written in C++.

Students were provided with a blank base project that included both a hexagonal tile map API and GUI. 

I was tasked with creating an implementation of the A* search algorithm that consistently outperforms the provided example implementation in terms of both speed and solution optimality.

I started by iterating through the provided tile map array and creating an "adjacency list" for each tile: a vector of all of its immediately neighboring tiles. I then mapped each tile to its corresponding adjacency list via an unordered map, which reduced average lookup time from O(log n) (as with an ordered std::map) to O(1). I also created an unordered "visited" map, which held references to tile objects rather than copies so as to reduce memory usage.
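The adjacency-map construction reads roughly like the following sketch, transposed from the original C++ into Python for brevity. The axial coordinates and the six neighbor offsets are assumptions about the hex layout; a Python dict plays the role of the unordered map.

```python
# The six neighbors of a hex tile in axial (q, r) coordinates — an
# assumption about how the provided tile map addresses its hexes.
HEX_OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def build_adjacency(tiles):
    """Map each tile to the list of its immediate hex neighbors.

    The dict gives O(1) average lookup, analogous to std::unordered_map."""
    tile_set = set(tiles)
    adjacency = {}
    for q, r in tiles:
        adjacency[(q, r)] = [
            (q + dq, r + dr)
            for dq, dr in HEX_OFFSETS
            if (q + dq, r + dr) in tile_set
        ]
    return adjacency

# Tiny three-tile map for demonstration:
tiles = [(0, 0), (1, 0), (0, 1)]
adj = build_adjacency(tiles)
```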

Because my implementation of A* uses an admissible heuristic, every solution it generates is guaranteed to be optimal.
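The core A* loop, with the "visited" map and an admissible hex-distance heuristic, can be sketched like this (again in Python rather than the project's C++; the axial-coordinate distance formula is an assumption matching the offsets above). The heuristic never overestimates the true step count, which is exactly what guarantees the returned path is optimal.

```python
import heapq

def hex_distance(a, b):
    # Admissible heuristic: exact step count between two hexes in axial
    # coordinates on an unobstructed grid, so it never overestimates.
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

def a_star(start, goal, neighbors):
    """neighbors(tile) yields adjacent tiles; every edge costs 1."""
    open_heap = [(hex_distance(start, goal), 0, start, [start])]
    best_g = {}  # the "visited" map: tile -> cheapest cost found so far
    while open_heap:
        _, g, tile, path = heapq.heappop(open_heap)
        if tile == goal:
            return path
        if best_g.get(tile, float("inf")) <= g:
            continue  # already reached this tile at least as cheaply
        best_g[tile] = g
        for nbr in neighbors(tile):
            f = g + 1 + hex_distance(nbr, goal)
            heapq.heappush(open_heap, (f, g + 1, nbr, path + [nbr]))
    return None  # goal unreachable

# Toy unobstructed hex grid for demonstration:
OFFSETS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
neighbors = lambda t: [(t[0] + dq, t[1] + dr) for dq, dr in OFFSETS]
path = a_star((0, 0), (3, 2), neighbors)
```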

The A* search algorithm in action, with tile drawing toggled on and slow motion enabled. 

A visual representation of the provided tile map array.

This project was an excellent challenge in terms of code optimization and profiling. I had to employ nearly all of my prior knowledge of data structures and time complexity to consistently "beat" the benchmark in both speed and solution optimality. Ultimately, I found that relying on certain C++11 standard library features and helper functions related to std::priority_queue and std::map often negatively impacted performance. 

"Build the best - Destroy the rest!" Robot Fighter AI

What Is It?

Robocode is an open source programming game where the goal is to develop a robot battle tank that competes against other tanks. My bot's source code is written in C#.

I created a bot, Eric.exe, using the Robocode API that operates as a simple finite state machine. To do this, I needed to implement a custom finite state machine (FSM) class. 

My FSM class contains a <Condition, State> dictionary, which maps each of Eric.exe's event triggers to the corresponding next state. This structure has made the addition of new states a trivial process: all that is required to create a new state is a new entry in the dictionary.

I also created some helper functions that access the dictionary to determine which state to swap to, given a specific trigger condition. A more detailed explanation of Eric.exe's state triggers is shown in a diagram below.
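A minimal sketch of this dictionary-driven FSM looks something like the following. The original is C#, so this Python version is only illustrative, and the condition names are assumptions based on the state descriptions below.

```python
class StateMachine:
    """Dictionary-driven finite state machine: (condition, state) -> state."""

    def __init__(self, initial_state):
        self.state = initial_state
        self.transitions = {}

    def add_transition(self, condition, from_state, to_state):
        # Adding a new state is just a new entry in this dictionary.
        self.transitions[(condition, from_state)] = to_state

    def trigger(self, condition):
        # Helper: look up which state to swap to for a given condition;
        # stay put if no transition is defined from the current state.
        self.state = self.transitions.get((condition, self.state), self.state)
        return self.state

fsm = StateMachine("Move & Seek")
fsm.add_transition("enemy_detected", "Move & Seek", "Moving Shooting")
fsm.add_transition("enemy_fired", "Moving Shooting", "Stationary Shooting")
fsm.add_transition("enemy_fired", "Stationary Shooting", "Moving Shooting")
```

Because transitions live in data rather than in a switch statement, wiring up a new behavior never requires touching the machine's update logic.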

My bot, Eric.exe (pink), in a 1v1 battle with the provided benchmark bot (blue).

A diagram of Eric.exe's state machine.

At the beginning of each round, my bot starts out in a general “seeking” state, spinning its radar until another bot is detected. Eric.exe then verifies that the scanned bot is indeed an enemy. If the scanned bot is a teammate, the bot transitions back to the Move & Seek state and the process repeats. 

When Eric.exe successfully detects an enemy bot, it transitions to the Moving Shooting state. It is worth mentioning that no transition exists from Move & Seek to Stationary Shooting; that is, Eric.exe will always begin with a mobilized attack when first discovering an enemy. 

This AI utilizes the stop-and-go movement technique as described here. After detecting a drop in enemy energy between 0.1 and 3.0 (the valid range of Robocode bullet power), Eric.exe interprets the event as the enemy firing. As a result, the AI enters the Stationary Shooting state, where it halts all movement until a subsequent shot is fired by the enemy. Eric.exe then resumes moving forward, attempting to maintain an angle perpendicular to the enemy at all times. This dodging strategy has turned out to be incredibly effective against the benchmark's simple targeting technique. I got the idea to maintain a perpendicular position from this basic movement article.
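The energy-drop check at the heart of stop-and-go is tiny. A sketch, with variable names of my own choosing (the real bot is C#): a drop between 0.1 and 3.0 matches what firing a bullet costs in Robocode, while larger drops (wall hits, rams, taking a bullet) are ignored.

```python
def enemy_fired(previous_energy, current_energy):
    """Treat an energy drop in the bullet-power range as the enemy firing.

    Drops outside 0.1-3.0 are caused by something other than a shot
    (e.g. a wall collision), so they must not trigger the stop."""
    drop = previous_energy - current_energy
    return 0.1 <= drop <= 3.0
```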

Adaptive Curves: Computational Structures in Computer Graphics (CAP4730) Project 2021

What Is It?

This was a project for my Computer Graphics course, written in C++ with OpenGL. The goal of the project was to gain a greater familiarity with concepts such as subdivision, Bezier curves, De Casteljau's algorithm, and more. 

A demo of my picking, subdivision, and adaptive Bezier curves in action. 

I spawned 8 points in 3D space and connected them with a line loop to generate a polygon. I then implemented a subdivision algorithm that allows users to render a smoother version (blue) of the initial polygon with each keypress.
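One common corner-cutting scheme for smoothing a closed polygon like this is Chaikin subdivision; the project's exact scheme isn't specified above, so the following Python sketch (the project itself is C++) is purely illustrative. Each edge is replaced by two points at 1/4 and 3/4 along it, doubling the vertex count per keypress.

```python
def chaikin_subdivide(points):
    """One round of Chaikin corner cutting on a closed 2D polygon."""
    result = []
    n = len(points)
    for i in range(n):
        (x0, y0) = points[i]
        (x1, y1) = points[(i + 1) % n]  # wrap around: the loop is closed
        result.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        result.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return result

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
smoother = chaikin_subdivide(square)  # 8 points, corners cut
```

Repeated rounds converge toward a smooth curve, which is why each keypress makes the blue polygon visibly rounder.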

Finally, I implemented a form of De Casteljau's algorithm to generate N BB pieces and ultimately form a closed Bezier curve (yellow) using the initial 8 points as control points.
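De Casteljau's algorithm evaluates a Bezier curve at a parameter t by repeatedly lerping along the control polygon until one point remains. A self-contained Python sketch of the evaluation step (the project's version is C++ and additionally stitches N pieces into a closed curve, which is omitted here):

```python
def de_casteljau(control_points, t):
    """Evaluate a Bezier curve at parameter t (0 <= t <= 1).

    Each pass replaces adjacent control points with their lerp at t;
    after degree-many passes, a single point on the curve remains."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [
            tuple((1 - t) * a + t * b for a, b in zip(p, q))
            for p, q in zip(pts, pts[1:])
        ]
    return pts[0]

# Midpoint of a quadratic Bezier arch with three control points:
mid = de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5)  # -> (1.0, 1.0)
```

The same intermediate lerp points are what make the algorithm pleasant to visualize, since every pass can be drawn as a shrinking control polygon.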

All curves are adaptable in real-time by picking and dragging the green control points.

This was definitely a difficult project in the sense that I had never used the OpenGL API before, and I think there's a pretty steep learning curve associated with it. There's a lot to remember when it comes to the graphics pipeline, such as how to generate VAOs/VBOs, binding the correct buffer, drawing the correct number of vertices from the correct starting point in the buffer, etc. Overall, I feel like I now have a more thorough understanding of how the GPU parses vertex information and how shaders work under the hood. 

A closer look at what's going on behind the scenes with De Casteljau's algorithm!