
Aiden Rhaa

AWS Solutions Architect Associate · AWS Developer Associate · HashiCorp Terraform Associate


LinkedIn · Email


I design and deploy cloud infrastructure on AWS, choosing the right compute model, integration pattern, and data layer for what the workload actually requires. Sometimes that's serverless; sometimes it's EC2 with an Auto Scaling group. The pinned repos below show the output. What follows is how I think before I write a single line of code.


It starts with the real problem, not the interesting problem

Before choosing a service or designing a schema, I work backwards from what the end user actually needs. What decision are they trying to make? What friction are they trying to avoid? The architecture flows from that — not from what sounds impressive or what I want to practice.

Strong framework first, iteration second

Good software doesn't start with a sloppy draft. It starts with a clear idea of what you're building and why, what the constraints are, and what tradeoffs you're willing to make. Iteration is necessary — but you need something worth iterating on. Thinking through cost, ease of use, failure modes, and scope before touching a keyboard isn't overthinking. It's just engineering.

AI tools accelerate execution, fundamentals prevent getting lost

I use Claude Code and Codex to move fast — generating boilerplate, debugging, iterating on implementations. But I don't use them as a crutch. Understanding what's actually happening in the code means I can course-correct when the output is wrong, ask better questions, and make judgment calls the model can't make for me. Speed without comprehension is just faster mistakes.

Every architectural decision has a reason

I don't default to the most powerful option or the most familiar one. I ask: what does this workload actually look like? Burst-heavy or steady-state? Single writer or concurrent? Latency-sensitive or async-tolerant? The answers determine the data layer, the integration pattern, and the provisioning model. The goal is the minimum surface area that solves the problem well — nothing bloated, nothing missing.
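The workload questions above can be written down as an explicit decision aid. This is a toy sketch for illustration only: the trait-to-service mappings are simplified assumptions, not a real selection algorithm, and the service names are examples rather than recommendations.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    burst_heavy: bool         # spiky traffic vs. steady-state
    concurrent_writers: bool  # single writer vs. many
    latency_sensitive: bool   # sync request/response vs. async-tolerant

def pick_compute(w: Workload) -> str:
    # Spiky, pay-per-use workloads suit serverless; steady-state
    # workloads usually cost less on provisioned capacity.
    return "Lambda" if w.burst_heavy else "EC2 + Auto Scaling group"

def pick_integration(w: Workload) -> str:
    # Async-tolerant workloads can decouple producers and consumers
    # with a queue; latency-sensitive ones stay synchronous.
    return "API Gateway (sync)" if w.latency_sensitive else "SQS (async)"

def pick_data_layer(w: Workload) -> str:
    # Concurrent writers push toward a relational engine built for it;
    # a single writer can often use a simpler key-value design.
    return "Aurora PostgreSQL" if w.concurrent_writers else "DynamoDB single-table"

spiky_api = Workload(burst_heavy=True, concurrent_writers=False, latency_sensitive=True)
print(pick_compute(spiky_api))      # Lambda
print(pick_integration(spiky_api))  # API Gateway (sync)
print(pick_data_layer(spiky_api))   # DynamoDB single-table
```

The point isn't the specific answers, which depend on cost and scale; it's that each choice traces back to an observable property of the workload.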

Pinned repositories

1. pulpit-v2 (HTML): Multi-tenant sermon search platform on Amazon EKS with Helm, ArgoCD, Prometheus, Grafana, and Bedrock.

2. clearpath-fargate-api (HCL): Clearpath Lead Intelligence API on ECS Fargate, Aurora PostgreSQL, CloudFront, WAF, and Terraform.

3. photoscribe-ai (Python): Serverless AI-powered semantic photo search platform using AWS Bedrock, S3 Vectors, Lambda, API Gateway, Terraform, React, and Cloudflare Pages.

4. super-transcriber (TypeScript): Cost-first serverless MP3 transcription app built with React, Terraform, Cognito, S3, Lambda, DynamoDB, and Amazon Transcribe.

5. market-scout (HTML): Static real estate market intelligence app that turns Redfin market-tracker data into searchable, sortable US city, ZIP, and county comparisons via a Python build pipeline and Cloudflare Pages.

6. probate-bot (Python): Playwright-based CLI + web UI for pulling public probate estate leads from Georgia county portals. Scores real-estate-relevant cases, syncs to SQLite with deduplication, and exports CSV/JSON. Inclu…