Compare commits


5 Commits

Author SHA1 Message Date
85017c2ddc Normal dashes 2025-08-28 10:52:34 -04:00
a79019fc9f Revised resume points 2025-07-31 13:29:28 -04:00
69b3c99e6f Redo project points 2025-07-31 12:46:25 -04:00
6673b67cd8 Draft AML project/refined Red Hat points 2025-07-30 14:41:12 -04:00
00dd6b77e9 Gobcog/spotify-vis 2025-01-02 10:39:35 -05:00
7 changed files with 109 additions and 175 deletions

.gitignore

@@ -4,6 +4,7 @@ resources/_gen/
 themes/base16*
 *.pdf
+*pt*
 commit-msg.txt
 .hugo_build.lock


@@ -104,7 +104,7 @@ body {
   background-color: $background-color;
   color: $color;
   // line-height: 1.5;
-  line-height: 1.59;
+  line-height: 1.57;
   // font-size: 100%;
   // font-size: 15px;
   font-size: 17px;
@@ -578,7 +578,7 @@ header {// {{{
   // }
 }// }}}
 h2 {// {{{
-  //color: $base-orange;
+  color: $base-orange;
   margin-top: .5rem;
   font-size: 1em;
@@ -639,25 +639,15 @@ header {// {{{
 }
 .project-header {
-  display: flex;
-  align-items: baseline;
-  justify-content: space-between;
+  // margin-bottom: .6em;
+  // margin-bottom: .1em;
   margin-bottom: 5px;
 }
-.project-date {
-  margin-left: 1em;
-}
 .project-title {
-  // color: $base-blue;
-  color: black;
   display: inline;
-  margin-right: 0.5em;
-}
-.project-title span {
-  display: inline;
-  margin-left: 0.5em;
-  font-weight: normal;
 }
 .project-link {
@@ -719,10 +709,8 @@ header {// {{{
 }
 .languages {
-  font-weight: normal;
-  font-style: normal;
-  margin-left: 0.5em;
-  color: $base03;
+  // font-style: italic;
+  // font-size: .9em;
 }
 .institution {


@@ -3,85 +3,81 @@ title: "Resume"
 date: 2019-02-11T07:50:51-05:00
 draft: false
 ---
-{{% resume/section "Work Experience" %}}<!--- {{{ -->
-{{% resume/work-experience name="Red Hat"
-title="Cloud/Software Engineer Intern" languages="Kubernetes, GoLang, Jenkins" date="May 2022 — Aug 2023" %}}
-- **Provided Tier 1 and Tier 2 support**, resolving
-  user-reported issues with CI/CD pipelines and Kubernetes
-  environments, resulting in a **40% faster average response
-  time**.
-- **Diagnosed and resolved 80% of configuration
-  errors** in Kubernetes deployments by automating data
-  fetching and validation, **reducing system downtime
-  by 40%** and improving reliability for end-users.
-- **Reduced deployment-related support tickets by 66%**
-  by developing a CLI tool to automate Kubernetes
-  binary deployments, with documented troubleshooting
-  procedures that cut resolution time from 45 to 15
-  minutes.
-- **Decreased configuration error escalations by 30%**
-  through dynamic probe defaults and created knowledge
-  base articles enabling Tier 1 support to resolve most
-  probe issues independently.
-- **Authored clear, user-friendly documentation** that
-  translated complex technical processes into
-  step-by-step guides, **accelerating onboarding by
-  50%** and enabling non-technical stakeholders to
-  self-serve.
-- **Collaborated with QA and DevOps teams** to document
-  root causes of startup failures in legacy systems,
-  implementing dynamic probes that **cut production
-  launch issues by 50%**.
-{{% /resume/section %}}<!--- }}} -->
-
-{{% resume/section "Web Dev Projects" %}}<!--- {{{ -->
-
-<!--- AWS {{{ -->
-
-{{% resume/project name="AWS Server"
-url="https://kevin-mok.com/server/" languages="AWS, Kubernetes, Docker, Terraform" date="May 2024" show="true" %}}
-
-- **Deployed and maintained multiple web applications**
-  using **Docker Compose** on **AWS EC2 Debian/Linux servers**,
-  ensuring consistent environments for applications handling
-  **over 2,000+ monthly requests**.
-- **Built a uptime monitoring system** by writing a
-  JavaScript script and setting up a systemd
-  service/timer to check and display page uptime,
-  **ensuring near real-time monitoring and reducing downtime
-  time by 95%**.
-- **Automated AWS infrastructure provisioning** by writing
-  **Terraform** files to deploy AWS EC2 instances and Docker
-  containers, **accelerating deployment times by 80%** and
-  providing an easily reproducible infrastructure setup.
-- **Improved web application accessibility** by
-  configuring **AWS Route 53**s DNS and **NGINX** to route
-  subdomains to individual web apps, **enabling seamless
-  navigation between apps**.
+{{% resume/section projects %}}<!--- {{{ -->
+<!--- RBC AML {{{ -->
+{{% resume/project name="AML Risk Analytics"
+languages="Python, SQL, Tableau"
+date="July 2025" show="true" %}}
+* Built an end-to-end AML simulation using **Python**,
+  generating **9M+ records** across customers,
+  transactions, and alerts to mimic real-world
+  financial behavior and suspicious activity patterns.
+* Wrote advanced **SQL (CTEs + joins)** to classify
+  **high-risk customers**, calculate alert counts, and
+  filter transactions over the past 90 days with
+  aggregated metrics.
+* Engineered a **risk scoring model** in Python
+  using transaction thresholds and alert volume to
+  classify customers as Elevated or Critical risk.
+* Designed **interactive Tableau dashboards** (Risk
+  Heatmap, Alert Efficiency, Risk vs. Avg Amount) to
+  visualize cross-country AML exposure and alert
+  effectiveness.
+- **Developed KPI-ready metrics** (alert rate, avg USD
+  exposure, transaction volume) to drive AML
+  performance reporting and enable cross-country risk
+  comparisons.
+- **Normalized multi-currency transaction data** to
+  ensure consistent exposure calculations across USD,
+  CAD, and EUR, supporting reliable AML metric
+  aggregation.
 {{% /resume/project %}}
-<!--- AWS }}} -->
+<!--- RBC AML }}} -->
-<!--- Spotify Visualized {{{ -->
-{{% resume/project name="Spotify Visualized"
-url="https://github.com/Kevin-Mok/astronofty" languages="Python, Django" date="June 2023"
-show="true" %}}
-- **Built a high-performance Python backend** using
-  Django and PostgreSQL to process 10K+ data records
-  per user, optimizing ingestion pipelines via API
-  integration and ORM modeling.
-- **Engineered normalized database schemas** to
-  streamline query workflows, achieving a **50%
-  reduction in PostgreSQL latency** for high-volume
-  reporting tasks.
-- **Visualized user music libraries in Tableau**,
-  creating dashboards that grouped tracks by **artist
-  and genre**, enabling users to explore listening
-  patterns and discover trends in their Spotify data.
-{{% /resume/project %}}
-<!--- Spotify Visualized }}} -->
 <!--- Rarity Surf {{{ -->
 {{% resume/project name="Rarity Surf"
-languages="TypeScript, JavaScript, Node.js, React"
-date="March 2025" show="true" %}}
-- **Provided direct user support** for a live NFT analytics
-  platform, resolving front-end filtering bugs and API
-  integration issues in real-time.
-- **Developed a full-stack web application with PostgreSQL database** to analyze NFT rarity rankings, increasing market research efficiency by 80%.
-- **Translated user requests into technical features**, implementing real-time PostgreSQL-powered filters that improved usability for non-technical traders.
-- **Debugged and optimized API performance** under
-  load, reducing latency by 50% and enabling **3,000+
-  concurrent users** to filter NFT data
-  seamlessly, addressing real-time customer usability
-  issues.
+languages="Python, Django, JavaScript, React"
+date="Oct 2022" show="true" %}}
+- **Built a full-stack reporting tool** using React,
+  Django, and **PostgreSQL** to analyze
+  structured/unstructured metadata from APIs, enabling
+  real-time rarity scoring and improving insight
+  delivery by **80%**.
+- **Optimized SQL query performance** within a
+  Django-based pipeline, processing NFT ranking data at
+  scale and exposing results via GraphQL with
+  **low-latency response times under high concurrency
+  (2,000+ queries)**.
 {{% /resume/project %}}
@@ -89,25 +85,46 @@ date="March 2025" show="true" %}}
 {{% /resume/section %}}<!--- }}} -->
+{{% resume/section "Work Experience" %}}<!--- {{{ -->
+{{% resume/work-experience name="Red Hat"
+title="Cloud/Software Engineer Intern" languages="Kubernetes, GoLang, Jenkins" date="May 2022 - Aug 2023" %}}
+- **Decreased manual configuration errors by 80%** by
+  automating service discovery and dynamic config
+  updates, aligning with AML goals of minimizing
+  operational risk and improving data integrity.
+- **Enhanced CI pipeline reproducibility and
+  performance** by rewriting the Jenkins nightly
+  pipeline to support automated PR-level testing with
+  reusable parameters, improving report consistency
+  across environments.
+- **Collaborated cross-functionally** with developers
+  and testers to maintain reliable infrastructure,
+  echoing the AML role's emphasis on stakeholder
+  partnership for building robust reporting systems.
+- **Improved system reliability** during production
+  launches by implementing startup probes for legacy
+  services, reducing downtime and enhancing stability
+  for automated monitoring/reporting pipelines.
+- **Reduced reporting deployment time by 66%** by
+  building a CLI-based solution to push compiled
+  binaries directly into Kubernetes/Openshift clusters,
+  accelerating turnaround for testing and data
+  validation workflows.
+{{% /resume/section %}}<!--- }}} -->
 {{% resume/section skills %}}<!--- {{{ -->
-- **IT Support Skills**: Tier 1/2 Troubleshooting, Incident
-  Response, Jira, Microsoft 365, VPN, Log Analysis, Knowledge Base Writing, Root Cause
-  Investigation
-- **Customer Support**: Cross-team Collaboration,
-  Communication, User Training, Documentation
-- **Programming Languages**: Python, Go, JavaScript,
-  TypeScript
-- **Web Development**: AWS, PostgreSQL, Linux, React, Django
+**Python**, **SQL**, **PostgreSQL**, **Tableau**, **MongoDB**, **JavaScript**, Django, **React**, Bash, **Git**, **Linux**, **Command Line**, Go(Lang), AWS, Kubernetes, Terraform, Docker (Compose), Jenkins, Groovy, Solidity, C
 {{% /resume/section %}}<!--- }}} -->
 {{% resume/section education %}}<!--- {{{ -->
 {{% resume/education name="University of Toronto (St. George)"
-title="Computer Science Specialist 3.84 GPA (CS). Graduated with High Distinction." date="2019 2024" %}}
+title="Computer Science Specialist - 3.84 GPA (CS). Graduated with High Distinction." date="2019 - 2024" %}}
+- **Relevant Coursework**: Computer Networking, Databases, Operating Systems
 {{% /resume/section %}}<!--- }}} -->
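The new AML bullets above describe a Python risk-scoring model driven by transaction thresholds and alert volume. A minimal sketch of that idea (the cut-off values and the `score_customer` helper are invented for illustration, not taken from the actual project):

```python
# Illustrative sketch of threshold-based AML risk scoring; all
# thresholds below are hypothetical, not the project's real values.

def score_customer(total_txn_usd: float, alert_count: int) -> str:
    """Classify a customer as Standard, Elevated, or Critical risk."""
    # Hypothetical cut-offs: alert volume dominates, then aggregate
    # USD transaction exposure.
    if alert_count >= 5 or total_txn_usd >= 1_000_000:
        return "Critical"
    if alert_count >= 2 or total_txn_usd >= 250_000:
        return "Elevated"
    return "Standard"

print(score_customer(1_200_000, 1))  # exceeds the exposure cut-off: Critical
print(score_customer(50_000, 0))     # under both cut-offs: Standard
```

The same tiered logic could equally be expressed as a SQL `CASE` over the aggregated per-customer metrics the bullets describe.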


@@ -1,39 +0,0 @@
# ME Sniper
write me a resume section similar to this (just a bit longer) for a web dev resume based on the points after with made up statistics
## Old
- **Developed a full-stack web application** to generate rarity
rankings for NFT's integrated with leading NFT
marketplace's (OpenSea) API,
enabling users to **quickly identify rare NFT's** and check
their listing status, **improving market research efficiency by 80%**.
- **Architected a robust Django (Python) [backend](https://github.com/Kevin-Mok/rarity-surf)** to fetch and process
NFT metadata from IPFS, store rarity rankings in
**PostgreSQL**, and serve the data via GraphQL API, **ensuring low-latency access and scaling to handle 2,000+ concurrent requests**.
- **Developed a dynamic React (Javascript)
[frontend](https://github.com/Kevin-Mok/rarity-surf-frontend)** using hooks to load
rarity data in real-time, styled with Tailwind for
mobile responsiveness, **improving user experience
and reducing frontend load times by 70%**.
## New
- Developed a full-stack web application to generate rarity rankings for NFTs integrated with leading NFT marketplaces (Magic
Eden) API, enabling users to quickly identify rare NFTs and check their listing status, improving market research efficiency by 80%.
- fetch metadata from either IPFS or website in parallel processes to create rarity
rankings as soon as metadata revealed
- reverse engineered algorithm for rarity rankings for NFT's based on article from
marketplace about their in-house statistical rarity
ranking
- created Prisma schema for PostgreSQL for database to store NFT data
- Node.js backend with API endpoints to return NFT's based
on max rank/price along with rarest traits
- lowest prices for rarity percentile to see if good deal
- fetch all listings from leading marketplace (Magic Eden) to be
able to identify which rare NFT's are on sale and be able
to filter based on max price/filter
- store previous sales data to check whether a buy at rarity
percentile is a good deal
- React FE to dynamically load NFT's based on rarity
rank/price filter with ability to hide seen ones
- Discord bot to notify you when customizable profitable resale
opportunity comes up based on rarity level/price


@@ -52,41 +52,6 @@ date="Oct 2021" show="true" %}}
 <!--- Rarity Surf }}} -->
-<!--- Rarity Surf {{{ -->
-{{% resume/project name="Rarity Surf (2)"
-languages="Typescript, Node.js, React"
-date="" show="true" %}}
-- **Developed a full-stack web application** to generate
-  rarity rankings for NFT's, integrating with **leading
-  marketplaces API** to enable users to quickly identify
-  rare NFT's and check their listing status, **improving
-  market research efficiency by 80%**.
-- **Built a scalable Node.js backend** with REST API
-  endpoints to return NFTs based on customizable filters
-  such as max rank, price, and rarest traits. **Optimized
-  performance** to handle **3,000+ concurrent requests** by
-  implementing efficient data fetching and caching
-  mechanisms, ensuring low-latency access to NFT data.
-- **Developed a dynamic React frontend** to load and display
-  NFT's in real-time based on user-defined filters to
-  streamline browsing. Styled the interface using **Tailwind
-  CSS** for a responsive and modern design, **reducing
-  frontend load times by 50%**.
-- **Developed a Discord bot** to notify users of profitable
-  resale opportunities by leveraging historical sales data
-  to assess deal quality. This feature **increased user
-  engagement by 80%** and provided a seamless way for users
-  to stay updated on market opportunities.
-- Designed and implemented a **PostgreSQL schema** for to
-  efficiently store NFT data, including metadata, rarity
-  scores, and historical sales data.
-{{% /resume/project %}}
-<!--- Rarity Surf }}} -->
 <!--- Astronofty {{{ -->
 {{% resume/project name="Astronofty"
@@ -177,6 +142,7 @@ url="https://kevin-mok.com/server/" languages="AWS, Kubernetes, Docker, Terrafor
 <!--- AWS 3 }}} -->
 <!--- Astronofty (extended) {{{ -->
 {{% resume/project name="Astronofty"


@@ -2,8 +2,9 @@
 <div class="row project-header">
   <div class="col-8 text-left">
     <h2 class="project-title">
-      {{ .Get "name" }} <span class="languages">&lt;{{ .Get "languages" }}&gt;</span>
+      {{ .Get "name" }}
     </h2>
+    <span><{{ .Get "languages" }}></span>
   </div>
   <div class="col-4 text-right date">{{ .Get "date" }}</div>
 </div>
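This shortcode change moves the languages list out of the `<h2>` into a sibling `<span>`, so heading styles no longer apply to it. A quick sketch of the resulting markup, using Python's `str.format` as a stand-in for Hugo's template engine (`TEMPLATE` and `render_project` are illustrative, not part of the repo):

```python
# Sketch of the new project-header markup; str.format stands in for
# Hugo's {{ .Get "..." }} parameter lookups.
TEMPLATE = """<div class="row project-header">
  <div class="col-8 text-left">
    <h2 class="project-title">{name}</h2>
    <span>&lt;{languages}&gt;</span>
  </div>
  <div class="col-4 text-right date">{date}</div>
</div>"""

def render_project(name: str, languages: str, date: str) -> str:
    # The languages <span> is a sibling of the <h2>, not nested
    # inside it, matching the restructured shortcode.
    return TEMPLATE.format(name=name, languages=languages, date=date)

print(render_project("AML Risk Analytics", "Python, SQL, Tableau", "July 2025"))
```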