
Harvest Hub: Verified Community Food Sharing

Harvest Hub is a small web app my team and I built during a 24 hour hackathon. People can list extra food, people nearby can quietly claim it, and after pickup we run a simple image check that compares the “before” and “after” photos using Supabase Storage and an OpenAI vision model.

Team: Jadyn Worthington · Joseph Caballero · Dean Walston · Mercedes Mathews

Next.js · React · TypeScript · Supabase · PostgreSQL · OpenAI · Tailwind CSS · shadcn ui · Vercel
View Live Deployment →

What the Product Does

The goal was simple: move real food to real people and keep the system honest without a lot of manual oversight.

  • People (the public) post listings with a photo of surplus cooked food and a pickup window.
  • Community members claim portions from a feed of open listings.
  • After pickup, users upload a proof photo. The backend compares it to the original image and marks the claim as verified if the similarity score is high enough.
Screenshots: listing page · claim flow · verification state

Tech Stack

Core

The app is built with Next.js, React, and TypeScript so the UI and APIs live in the same codebase. Supabase handles auth, Postgres, and private file storage. A single Next.js API route calls an OpenAI vision model to score how similar two images are.

Frontend

Tailwind CSS keeps layout and spacing fast to work with. shadcn ui gives us solid, accessible components, and lucide-react provides the icons. It is a small stack, but it let us get to “usable” quickly.

Platform

Everything is deployed on Vercel, which runs both the Next.js pages and the API routes in one deployment.

Background

The hackathon opened with three prompts: health advocacy, financial literacy, and food scarcity. We picked food scarcity and decided that if we were going to stay up all night, we wanted something that actually moved data end to end, not just a slide deck.

After the kickoff, we worked from about 11 PM to 8 AM building the core loop:

  1. People create listings with photos of surplus food.
  2. Community members claim the listings they want.
  3. Users upload a proof photo after pickup.
  4. The backend decides whether to verify the claim based on image similarity.

Why This Tech Stack

Next.js (web + API)

One stack for pages and backend code meant less setup and more time shipping features. File based routing gave us quick pages for listings and claims, and API routes let us keep all our logic close to the UI.

Supabase (backend + storage)

Supabase gave us auth, Postgres, and file storage with a single client. We created basic tables for publishers, listings, claims, and image_verifications.
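
As a rough sketch, the row shapes could be expressed as TypeScript types like the ones below; the exact column names are assumptions, not the real schema.

```ts
// Hypothetical row types for the four tables; every column here is illustrative.
type Publisher = {
  id: string;
  display_name: string;
  created_at: string;
};

type Listing = {
  id: string;
  publisher_id: string;
  image_path: string; // path inside the private storage bucket
  pickup_start: string;
  pickup_end: string;
  status: 'open' | 'claimed' | 'closed';
};

type Claim = {
  id: string;
  listing_id: string;
  claimer_id: string;
  status: 'pending' | 'verified' | 'flagged';
};

type ImageVerification = {
  id: string;
  claim_id: string;
  similarity_score: number;
  created_at: string;
};
```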

Images live in a private bucket. The server generates short lived signed URLs whenever the verification route needs to read them, so we never expose bucket keys to the browser.
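
A minimal sketch of that pattern with the Supabase JS client; the bucket name and the 60 second expiry are assumptions.

```ts
import { createClient } from '@supabase/supabase-js';

// Server only client — the service role key never ships to the browser.
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Generate a short lived signed URL for a private object.
export async function signedUrlFor(path: string): Promise<string> {
  const { data, error } = await supabase.storage
    .from('listing-images') // hypothetical bucket name
    .createSignedUrl(path, 60); // valid for 60 seconds
  if (error || !data) throw error ?? new Error('no signed URL returned');
  return data.signedUrl;
}
```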

OpenAI (image verification)

For verification, we send the original listing image and the proof image to an OpenAI endpoint and use the similarity score it returns. If the score is high enough, the claim is marked verified; otherwise it is flagged. It is not perfect, but it was enough to show the idea working.
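
A sketch of what that call could look like; the model name, prompt, and 0 to 100 scale are assumptions for illustration, not the exact request we shipped.

```ts
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY on the server

// Ask a vision capable model for a 0–100 similarity score between two photos.
export async function similarityScore(
  beforeUrl: string,
  afterUrl: string
): Promise<number> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // assumed model choice
    messages: [
      {
        role: 'user',
        content: [
          {
            type: 'text',
            text: 'On a scale of 0 to 100, how likely is it that these two photos show the same food? Reply with only the number.',
          },
          { type: 'image_url', image_url: { url: beforeUrl } },
          { type: 'image_url', image_url: { url: afterUrl } },
        ],
      },
    ],
  });
  return Number(completion.choices[0].message.content?.trim() ?? 0);
}
```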

shadcn ui + lucide + Framer Motion

We used shadcn ui for forms, cards, and modals, and lucide for icons. Those libraries let us keep the interface clean without sinking hours into custom styling, and Framer Motion gave us smooth animations for the landing page.

Architecture Overview

At a high level:

  • Next.js on Vercel renders the UI and exposes API routes like /api/listings, /api/claims, and /api/verify.
  • Supabase manages auth, Postgres tables, and private storage buckets.
  • OpenAI provides the image similarity score when we need to verify a claim.
High level architecture diagram: Client → Next.js → Supabase Auth / DB / Storage → OpenAI

API Design and Data Flow

1. Create listing – POST /api/listings

The route receives the listing form and image, uploads the image to a private bucket, and creates a row in listings with the image path and pickup window.
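
A minimal sketch of that route as an App Router handler; the bucket and field names are assumptions.

```ts
// app/api/listings/route.ts — illustrative, not the handler we shipped verbatim.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function POST(req: Request) {
  const form = await req.formData();
  const image = form.get('image') as File;
  const path = `listings/${crypto.randomUUID()}-${image.name}`;

  // Upload to the private bucket first, then record the path in Postgres.
  const { error: uploadError } = await supabase.storage
    .from('listing-images')
    .upload(path, image);
  if (uploadError) {
    return Response.json({ error: uploadError.message }, { status: 500 });
  }

  const { data, error } = await supabase
    .from('listings')
    .insert({
      image_path: path,
      pickup_start: form.get('pickup_start'),
      pickup_end: form.get('pickup_end'),
    })
    .select()
    .single();
  if (error) return Response.json({ error: error.message }, { status: 500 });
  return Response.json(data, { status: 201 });
}
```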

2. Create claim – POST /api/claims

When a user claims a listing, we insert a row in claims with status set to pending. This records intent but does not count as a finished pickup yet.
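
The claim route reduces to a single insert; a sketch under the same assumptions (server side client, illustrative column names).

```ts
// app/api/claims/route.ts — records intent without completing the pickup.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function POST(req: Request) {
  const { listingId, userId } = await req.json();
  const { data, error } = await supabase
    .from('claims')
    .insert({ listing_id: listingId, claimer_id: userId, status: 'pending' })
    .select()
    .single();
  if (error) return Response.json({ error: error.message }, { status: 500 });
  return Response.json(data, { status: 201 });
}
```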

3. Upload proof and verify – POST /api/verify

After pickup, the user uploads a proof photo. The route stores that image, fetches both image paths, generates signed URLs, and calls the OpenAI model.

We store the score in image_verifications and update the claim to verified or flagged depending on the threshold.
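
That last step is a threshold check; a sketch with an assumed cutoff value.

```ts
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

const VERIFIED_THRESHOLD = 80; // assumed cutoff, not the value we actually tuned

// Persist the score, then flip the claim based on the threshold.
export async function recordVerification(claimId: string, score: number) {
  await supabase
    .from('image_verifications')
    .insert({ claim_id: claimId, similarity_score: score });

  await supabase
    .from('claims')
    .update({ status: score >= VERIFIED_THRESHOLD ? 'verified' : 'flagged' })
    .eq('id', claimId);
}
```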

Runtime View

The Next.js app runs on Vercel and talks to Supabase and OpenAI only through server side API routes. Writes for listings, claims, and verifications all go through those routes, which keeps the database and storage in sync. Photos are only accessible via signed URLs, and the OpenAI API key never leaves the server.

In short: Client → API routes → Supabase + OpenAI.

Key Technical Decisions

  • Use Next.js API routes for all writes so logic, storage, and DB access stay in one place.
  • Lean on Supabase for auth, Postgres, and file storage instead of wiring separate services.
  • Keep image verification and OpenAI calls strictly server side for security and easier iteration.
  • Use shadcn and lucide to get a clean UI without over-investing in custom design for a hackathon app.

Challenges & Learnings

Trust vs Simplicity

We did not have time for a full fraud system. A single similarity score and threshold felt like the right balance between something the judges could understand and something we could actually build that night.

File Uploads and Signed URLs

Most of the debugging time was spent on getting uploads into the right bucket, storing the paths correctly, and generating signed URLs only on the server.

Parallel Work Through Clear Endpoints

By agreeing on the shape of POST /api/listings, POST /api/claims, and POST /api/verify early, we were able to split the work between backend wiring and frontend flows without blocking each other.

Future: Scaling the Idea

Scalable architecture diagram: CDN → load balancer → Next.js instances → Supabase → queue → worker → model
How I would grow Harvest Hub beyond the hackathon version

The current build is a one night hackathon project, but the shape of the system can handle real traffic with a few deliberate upgrades. The target in my head is something like one hundred thousand users and around one million reads per day.

Web and API tier

I would keep Next.js but put it behind a CDN and a load balancer. Public listing pages are a good fit for edge caching so most read traffic never hits the app directly. The app itself would run as multiple Next.js instances that share the same environment and talk to the same Supabase project. Read endpoints would be tuned to be cache friendly, and write endpoints would stay small and transactional so each request either succeeds or fails clearly.
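
For example, a read endpoint can opt into CDN caching through response headers; the exact max-age values here are assumptions.

```ts
// app/api/listings/route.ts (GET) — a cache friendly read endpoint sketch.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

export async function GET() {
  const { data, error } = await supabase
    .from('listings')
    .select('*')
    .eq('status', 'open');
  if (error) return Response.json({ error: error.message }, { status: 500 });

  return Response.json(data, {
    headers: {
      // The CDN caches for 60s, then serves stale for 5 min while revalidating.
      'Cache-Control': 's-maxage=60, stale-while-revalidate=300',
    },
  });
}
```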

Supabase data and storage

On the Supabase side the work is mostly about discipline. I would add targeted indexes on listings, claims, and verification tables that match how the UI actually queries them. Verification rows can stay append only and analytics can read from materialized views or a small reporting table so heavy dashboards do not compete with core traffic. Storage already scales well for images, so the main concern is keeping bucket paths predictable and signed URL lifetimes reasonable.

Asynchronous image verification

The main change to the core loop would be verification. Right now the verify route calls the model directly. At scale I would turn that into a job: POST /api/verify would write a job into a queue and return quickly, and a small worker service would pull jobs, fetch signed URLs from Supabase Storage, call the vision model, and write the score and status back into Postgres, as sketched below.
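
A sketch of that split, using a plain Postgres table as the queue; the verification_jobs table, the module paths, and the helpers carried over from the earlier sketches (signedUrlFor, similarityScore, recordVerification) are all assumptions.

```ts
// Hypothetical module paths for the helpers defined in earlier sketches.
import { signedUrlFor } from './storage';
import { similarityScore } from './similarity';
import { recordVerification } from './verification';
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Inside POST /api/verify: write the job and return immediately.
export async function enqueueVerification(
  claimId: string,
  listingPath: string,
  proofPath: string
) {
  await supabase.from('verification_jobs').insert({
    claim_id: claimId,
    listing_path: listingPath,
    proof_path: proofPath,
    status: 'queued',
  });
}

// Worker loop body: pull one queued job, score it, write the result back.
export async function workOnce() {
  const { data: job } = await supabase
    .from('verification_jobs')
    .select('*')
    .eq('status', 'queued')
    .limit(1)
    .maybeSingle();
  if (!job) return; // nothing queued right now

  const before = await signedUrlFor(job.listing_path);
  const after = await signedUrlFor(job.proof_path);
  const score = await similarityScore(before, after);

  await recordVerification(job.claim_id, score);
  await supabase
    .from('verification_jobs')
    .update({ status: 'done' })
    .eq('id', job.id);
}
```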

Availability, consistency, and CAP trade offs

In terms of the CAP theorem, the internet will always give you network partitions at some point, so the real choice is how much consistency you are willing to trade for availability. For Harvest Hub I would lean toward availability: people should still be able to see listings and claim food even if a replica or the worker is having a bad day.

That means core writes like creating a listing or a claim stay strongly consistent, but secondary views are allowed to be eventually consistent. A dashboard might be a few seconds behind. A verification status might take a small delay to flip from pending to verified. In my opinion, that is an acceptable trade if it keeps the app responsive.

Reliability and safety

On top of that I would add edge caching for read heavy endpoints and rate limits on the claim and verify routes.

The overall shape of the system stays the same: Next.js at the edge, Supabase as the main backend, and an async verification service on the side. The difference is that each piece is treated as something people rely on every day, not just something that has to survive a demo.