Vibe Coding in 2026 — Lessons from a Simple Experiment
Lessons and reflections from a simple vibe coding experiment in 2026—documenting what worked, what didn’t, and why, for my future self and anyone curious to follow along.
The project, sg-bus-api, consists of a geospatial REST API endpoint that efficiently finds the nearest bus stops in Singapore for any given coordinate. It leverages S2 Geometry for fast, hierarchical spatial indexing and RocksDB as a high-performance embedded key–value store for low-latency lookups.
1. Project Overview
For this project, I implemented a REST API using Gin (Golang) with RocksDB storing a preloaded dataset of bus stop locations, while S2 Geometry handled spatial indexing.
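The lookup pattern can be sketched with a stdlib-only toy: a plain lat/lng grid stands in for S2 cell IDs, and an in-memory map stands in for RocksDB. All stop IDs, coordinates, and the cell size below are illustrative assumptions, not values from the actual codebase.

```go
package main

import (
	"fmt"
	"sort"
)

// BusStop is a minimal stand-in for a record in the preloaded dataset.
type BusStop struct {
	ID       string
	Lat, Lng float64
}

// cellKey maps a coordinate onto a coarse integer grid cell, a simplified
// stand-in for an S2 cell ID: nearby points share a key, so a nearest-stop
// query only needs to scan the query's cell and its immediate neighbours.
func cellKey(lat, lng float64) [2]int {
	const cellDeg = 0.01 // roughly 1 km at Singapore's latitude (assumption)
	return [2]int{int(lat / cellDeg), int(lng / cellDeg)}
}

// nearest returns the closest stop among the query cell and its 8 neighbours,
// sorting candidates by squared degree distance (fine at city scale).
func nearest(index map[[2]int][]BusStop, lat, lng float64) BusStop {
	qk := cellKey(lat, lng)
	var candidates []BusStop
	for dx := -1; dx <= 1; dx++ {
		for dy := -1; dy <= 1; dy++ {
			candidates = append(candidates, index[[2]int{qk[0] + dx, qk[1] + dy}]...)
		}
	}
	sort.Slice(candidates, func(i, j int) bool {
		di := (candidates[i].Lat-lat)*(candidates[i].Lat-lat) + (candidates[i].Lng-lng)*(candidates[i].Lng-lng)
		dj := (candidates[j].Lat-lat)*(candidates[j].Lat-lat) + (candidates[j].Lng-lng)*(candidates[j].Lng-lng)
		return di < dj
	})
	return candidates[0]
}

func main() {
	// Index a few stops by cell, mimicking how keys would be grouped in RocksDB.
	index := map[[2]int][]BusStop{}
	for _, s := range []BusStop{
		{"01012", 1.2966, 103.8521},
		{"01013", 1.2971, 103.8530},
		{"46009", 1.4370, 103.7860},
	} {
		k := cellKey(s.Lat, s.Lng)
		index[k] = append(index[k], s)
	}
	fmt.Println(nearest(index, 1.2968, 103.8525).ID)
}
```

Querying near the first two stops returns "01012", the closer of the pair. The real service uses S2's hierarchical cells instead of a flat grid, which avoids edge cases at cell boundaries and lets the cell size adapt to point density.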
To achieve a self-contained and cost-efficient hosting setup, I used an embedded, read-only database, packaged the service into a single Docker image, and deployed it to serverless Google Cloud Run via an automated Google Cloud Build pipeline.
2. Key Technical Learnings
Vibe Coding as a Manager
I used this as an opportunity to try out vibe coding as an "AI manager" who is new to the core technologies—RocksDB and S2 Geometry. With only a high-level conceptual understanding, I couldn't provide detailed guidance to the AI, so it was interesting to see how well it handled the work on its own.
I started by asking it to generate the API endpoint code using both technologies. The code it produced was functional and showed no obvious issues, but it was hard to read and did not meet the readability standards I usually apply to my own work.
The codebase looks difficult to maintain due to overly long functions, hardcoded values instead of reusable constants, and a lack of separation of concerns, with most of the logic confined to a single file.
That said, its correctness made it a solid starting point. With a few well-directed prompts, I was able to refactor the code into a cleaner, more structured, and maintainable implementation. It also generated the README.md on its own, which was a notable time saver.
Docker Image
To minimise the attack surface, I explored building both Scratch and Distroless images. I first built an image based on Distroless by compiling a dynamically linked binary against glibc, then created a Scratch-based image using a fully static binary compiled with musl.
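For reference, the Distroless variant follows a standard multi-stage pattern. This is a minimal sketch under assumed base images and paths; installing RocksDB and its dependencies in the builder stage is omitted for brevity.

```dockerfile
# Stage 1: dynamic build against glibc on a Debian-based Go image.
# (RocksDB and its dependencies would be installed or copied in here.)
FROM golang:1.24-bookworm AS app-builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=1 go build -o server .

# Stage 2: minimal glibc runtime with no shell or package manager.
FROM gcr.io/distroless/base-debian12:nonroot
COPY --from=app-builder /app/server /server
ENTRYPOINT ["/server"]
```

The Scratch variant swaps the glibc toolchain for a fully static musl build, sketched later in this post.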
RocksDB Version Mismatch
I ran into a RocksDB version mismatch while testing an Alpine-based image. The AI-generated code depends on grocksdb v1.9.1, which targets RocksDB v9—the latest version available via Debian's package manager. Alpine's package manager, however, is more up to date and provides RocksDB v11, leading to a build failure due to the incompatibility.
Instead of pinning RocksDB to v9, I opted to resolve the mismatch by compiling v10—the latest version supported by grocksdb—from source. This approach turned out to be less straightforward than anticipated.
Google Cloud Build, Part 1
Using AI, I quickly generated commands for Docker to build RocksDB from source, which worked successfully in my local environment. However, I encountered additional challenges when attempting to run the same build process on Cloud Build.
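The from-source build amounted to a dedicated builder stage along these lines. This is a sketch, not the exact generated commands: the v10.0.0 tag is a placeholder, and the CMake flags (real RocksDB build options) are chosen to skip tests and tools and shorten the build.

```dockerfile
FROM debian:bookworm AS rocksdb-builder
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential cmake git ca-certificates \
    libgflags-dev libsnappy-dev zlib1g-dev libbz2-dev liblz4-dev libzstd-dev
RUN git clone --depth 1 --branch v10.0.0 \
    https://github.com/facebook/rocksdb.git /rocksdb
WORKDIR /rocksdb/build
RUN cmake .. -DCMAKE_BUILD_TYPE=Release -DROCKSDB_BUILD_SHARED=ON \
      -DWITH_SNAPPY=ON -DWITH_LZ4=ON -DWITH_ZSTD=ON -DWITH_BZ2=ON \
      -DWITH_TESTS=OFF -DWITH_BENCHMARK_TOOLS=OFF -DWITH_TOOLS=OFF && \
    make -j"$(nproc)" install
```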
The first issue I encountered was that the build took significantly longer and eventually timed out after an hour. I used AI again for suggestions, and it recommended introducing a cloudbuild.yaml file, increasing the timeout limit, and/or using a more powerful machine type to reduce build time.
Here's an example cloudbuild.yaml (note that Cloud Build expects the timeout as a duration in seconds, e.g. 7200s rather than 2h):

timeout: 7200s
options:
  machineType: E2_HIGHCPU_8
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']
I included the cloudbuild.yaml file and triggered another build, but did not see any significant improvement. I also could not confirm whether the machine type had been updated, since it isn't shown in the build summary. I later learned that this is the expected behaviour: when the default machine type is used, the machine type field is omitted entirely from the build summary.
I continued experimenting by running additional builds on progressively more powerful machine types, including E2_HIGHCPU_16 and E2_HIGHCPU_32, but still observed no significant improvements.
I turned to AI again for suggestions. This led me down a rabbit hole of optimisations, including using even more powerful machines for building RocksDB (N2_HIGHCPU_16 and C2_HIGHCPU_16), switching to the Ninja build system (-G Ninja), etc. I observed some minor improvements, in hindsight likely due to fluctuating server load at different times, but no meaningful performance gains.
At this point, I felt like a dog chasing its own tail, so I took a step back and refocused on the core question: was the cloudbuild.yaml file actually being picked up? The build summary did not reflect the defined configuration, making verification difficult.
I thought more deeply about how I could verify it and considered an alternative approach: if a more powerful machine was actually being used, it should show up in the billing. However, upon checking, there were no additional charges, indicating that the new configuration wasn’t being applied.
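In hindsight, the applied machine type can also be read off a finished build directly; the build ID below is a placeholder.

```shell
gcloud builds describe BUILD_ID --format='value(options.machineType)'
```

An empty result means the default machine type was used, consistent with the field being omitted from the build summary.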
Once the confusion cleared up, I was able to resolve the issue quickly. I started by asking AI whether repository event triggers use the cloudbuild.yaml file, and it said no, suggesting that I trigger the build via the CLI instead. While this is only partially correct—it depends on the trigger's settings—the suggestion helped steer me in the right direction. I was working on a new computer without the Google Cloud CLI installed and, to avoid setting it up, I had relied solely on repository event triggers, assuming that would be sufficient for a simple project.
Since manually triggering builds via the CLI isn't a maintainable approach, I revisited the build trigger configuration. That's when I found the root cause: the trigger was set to use an inline configuration instead of reading from the repository, which explained why my cloudbuild.yaml wasn't being picked up. Relying too heavily on AI while rushing through the process caused me to overlook a critical step: switching the configuration from inline to repository in the trigger settings.
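For completeness, the same fix can be made from the CLI; the trigger name here is hypothetical, and flag names may vary slightly by gcloud version.

```shell
# Point an existing GitHub trigger at the in-repo cloudbuild.yaml
gcloud builds triggers update github sg-bus-api-trigger \
  --build-config=cloudbuild.yaml
```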
AI is a useful tool—a capable servant—but a poor master when used without understanding or critical thinking. This experience reinforced the limits of AI-assisted workflows: while it’s effective at generating code and suggesting solutions, it doesn’t possess full awareness of a specific system’s context or configuration, and can’t reliably catch issues that depend on those nuances. In this case, it couldn’t identify a configuration mismatch that required awareness of how the trigger was defined.
The key lesson is that contextual information, often assumed and omitted, can be critical when debugging. Writing a clear, effective issue description (or bug report)—especially one that provides sufficient context—is essential for successful troubleshooting, whether you’re working with AI assistants or human teams.
Google Cloud Build, Part 2
After successfully building RocksDB from source, the build process continued until it encountered another error.
Step 12/24 : COPY --from=app-builder /lib/*/librocksdb.so* /usr/local/lib/
COPY failed: no source files were specified
This looks odd—Docker is designed to eliminate “it works on my machine” issues by standardising the environment. There were no errors prior to this step, so something unexpected is happening.
The copy commands shown below are fully AI-generated. Although the first line is inconsistent with the rest, I resisted the temptation to manually adjust it for consistency. Since it builds and works correctly locally, I’ve left it unchanged.
COPY --from=app-builder /lib/*/librocksdb.so* /usr/local/lib/
COPY --from=app-builder /usr/lib/*/liblz4.so* /usr/local/lib/
COPY --from=app-builder /usr/lib/*/libzstd.so* /usr/local/lib/
COPY --from=app-builder /usr/lib/*/libbz2.so* /usr/local/lib/
COPY --from=app-builder /usr/lib/*/libgflags.so* /usr/local/lib/
COPY --from=app-builder /usr/lib/*/libxxhash.so* /usr/local/lib/
So I did a quick check and saw that /lib is simply a symbolic link.
16 0.093 lrwxrwxrwx 1 root root 7 Mar 2 21:50 /lib -> usr/lib
As a cybersecurity professional, I am aware that symlink-based attacks can occur during the Docker image build process, where symbolic links may be abused to access or overwrite unintended files within the container or the underlying filesystem. I therefore suspected that Google Cloud Build may be overly aggressive in blocking symlinks, so I replaced the symlink with the actual path to align it with the other copy commands, which resolved the issue.
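Concretely, resolving the symlink makes the first line consistent with the rest:

```dockerfile
COPY --from=app-builder /usr/lib/*/librocksdb.so* /usr/local/lib/
```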
Although this was a minor issue, it opened up a broader set of considerations around symlink-related attack vectors, serving as a reminder of the underlying complexity of software systems.
RocksDB Runtime Error
During the process of creating a secure production Docker image, I also encountered a RocksDB runtime error.
panic: open rocksdb at "./data/rocksdb": IO error: While renaming a file to ./data/rocksdb/LOG.old.xxxxxxxxxxxxxxxx: ./data/rocksdb/LOG: Permission denied
This issue occurs because the nonroot user is trying to write to a directory owned by root. The service has been switched to run as nonroot to improve security, but the directory permissions were not updated accordingly.
I tried deleting the LOG.old.xxx file after seeding to see if that would resolve the issue.
RUN go run seed.go
RUN rm -f /app/data/rocksdb/LOG*
However, the issue persists because the application still requires write permissions to the directory at runtime in order to create a new LOG file.
panic: open rocksdb at "./data/rocksdb": IO error: While open a file for appending: ./data/rocksdb/LOG: Permission denied
Since I am using RocksDB as a simple read-only database and have no need for any log data, I looked into preventing the logs from being generated in the first place. I noticed the OpenDbForReadOnly function in the RocksDB wrapper, so I tried switching to that instead. Unfortunately, this does not prevent the creation of the LOG file.
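For reference, the read-only open looks roughly like the fragment below, assuming the grocksdb wrapper (github.com/linxGnu/grocksdb) and the project's data path; it needs cgo and a local RocksDB install to actually run. As noted, it still creates the LOG file.

```go
package main

import (
	"log"

	"github.com/linxGnu/grocksdb"
)

func main() {
	opts := grocksdb.NewDefaultOptions()
	// Third argument: error out if a WAL file exists (false = ignore it).
	db, err := grocksdb.OpenDbForReadOnly(opts, "./data/rocksdb", false)
	if err != nil {
		log.Fatalf("open rocksdb read-only: %v", err)
	}
	defer db.Close()
}
```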
As these are internal logs with no disable switch and no straightforward way to prevent their creation, the simplest solution is to adjust the directory permissions instead. I asked AI to address this, but it appears to have a limited understanding of distroless containers. It kept suggesting chown and chmod commands, which are not available in hardened, minimal container environments such as distroless and scratch images, where non-essential tools like shells, package managers, and basic utilities are removed.
Arriving at the answer I was looking for took many rounds of back-and-forth prompting. It also helps to remind the AI, if it gets stuck, that distroless images do not include common Linux utilities such as chown.
COPY --chown=65532:65532 --from=app-builder /app/data/rocksdb /data/rocksdb
Note: As of 17 April 2026, popular AI systems appear increasingly capable of producing the right answer in just one or two prompts.
Building a Scratch Image
Building a scratch image wasn't as complicated as I initially thought. The AI told me it would be much harder and more complex than building a distroless image, especially when linking against musl instead of glibc, but it turned out to be quite straightforward. There were a few minor issues, but they weren't difficult to fix.
Quick Note: On Alpine, building RocksDB requires both the development package (e.g. snappy-dev) and the static package (e.g. snappy-static). In contrast, Debian-based systems typically provide just the development package (e.g. libsnappy-dev), without a separate static package.
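Putting the note above together with the static link flags, here is a sketch of the scratch build, with package names and flags as assumptions; building RocksDB itself statically is omitted for brevity.

```dockerfile
# Stage 1: fully static build with musl on Alpine.
FROM golang:1.24-alpine AS app-builder
# Alpine needs both the -dev and -static packages for static linking.
RUN apk add --no-cache build-base linux-headers \
    snappy-dev snappy-static zlib-dev zlib-static \
    bzip2-dev bzip2-static lz4-dev lz4-static zstd-dev zstd-static
WORKDIR /app
COPY . .
RUN CGO_ENABLED=1 go build \
    -ldflags '-linkmode external -extldflags "-static"' -o server .

# Stage 2: empty base image containing only the binary and seeded data.
FROM scratch
COPY --from=app-builder /app/server /server
COPY --chown=65532:65532 --from=app-builder /app/data/rocksdb /data/rocksdb
ENTRYPOINT ["/server"]
```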
3. Final Thoughts
In an ideal scenario where everything works flawlessly, building a simple REST API would take no more than 15 minutes—5 minutes for AI to generate the code, 5 minutes to deploy it, and 5 minutes to test it.
In reality, however, software is inherently complex—what looks simple on the surface can quickly become complicated when things don't behave as expected. Keep in mind that AI can still make mistakes, such as suggesting outdated or deprecated code and flags, or providing incorrect information and advice. I left out various other AI mistakes to focus on the more interesting ones.
To conclude, critical thinking and a solid grasp of fundamentals remain essential for effective problem-solving, especially when evaluating or verifying AI-generated code, commands, and troubleshooting advice.