
How I Structure Go Microservices (And Why Every Layer Earns Its Place)


There's a moment every Go developer hits — usually around the third time they've copy-pasted the same database setup code into a new service — where they think: there has to be a better way to organise this.

This post walks through the architecture I've settled on after building production microservices in Go. It's not a framework. It's not a boilerplate you clone and forget. It's a set of deliberate decisions that make your code easier to read, test, and change — whether you're a junior developer writing your first service or a tech lead reviewing your tenth.


The One Rule Everything Else Follows

Before any folder structure, any pattern, any ORM — there's one rule:

Dependencies flow inward. Never outward.

Handler → Service → Repository → Model

The handler knows about the service. The service knows about the repository. The repository knows about the model. Nothing flows the other way. Your model doesn't know HTTP exists. Your repository doesn't know what a gRPC request looks like.

When you follow this rule consistently, you get something valuable for free: you can swap any layer without touching the others. Change your ORM? Only the repository layer changes. Add a CLI? It reuses the service layer untouched. Switch from HTTP to gRPC? The service doesn't notice.
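The rule is cheap to enforce because Go interfaces are satisfied implicitly: the inner layer declares the interface it needs, and the outer layer plugs in a concrete implementation. A minimal sketch (names here are illustrative, not from the real service):

```go
package main

import "fmt"

// The service declares the interface it needs -- it never
// imports a concrete repository, so dependencies point inward.
type UserRepository interface {
	FindName(id string) string
}

type UserService struct{ repo UserRepository }

func (s *UserService) Greet(id string) string {
	return "hello, " + s.repo.FindName(id)
}

// Any concrete repository (postgres, in-memory, ...) can satisfy
// the interface; swapping it never touches the service.
type mapRepo struct{ data map[string]string }

func (r mapRepo) FindName(id string) string { return r.data[id] }

func main() {
	svc := &UserService{repo: mapRepo{data: map[string]string{"1": "ada"}}}
	fmt.Println(svc.Greet("1")) // hello, ada
}
```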


The Folder Structure

Here's what the full project looks like:

my-service/
├── cmd/
│   ├── server/main.go     ← starts the gRPC servers
│   └── cli/main.go        ← cobra CLI entry point
├── internal/
│   ├── service.go         ← Config + App wired in one place
│   ├── model/             ← pure domain structs (bun tags)
│   ├── dto/               ← request/response structs
│   ├── service/           ← business logic
│   ├── repository/        ← data access interfaces + postgres impls
│   ├── handler/grpc/      ← gRPC handlers
│   ├── database/          ← DB interface, driver factory, migrations
│   └── console/           ← cobra commands
└── proto/

Two entry points, one shared internal/ package. Both cmd/server and cmd/cli call internal.New(logger) — the same wiring, guaranteed.


Layer by Layer

The Model — your data, nothing else

type Product struct {
    bun.BaseModel `bun:"table:products,alias:p"`

    ID        string     `bun:"id,pk"`
    Name      string     `bun:"name,notnull"`
    SKU       string     `bun:"sku,unique,notnull"`
    Price     float64    `bun:"price,notnull"`
    OrgID     string     `bun:"org_id,notnull"`
    DeletedAt *time.Time `bun:"deleted_at,soft_delete,nullzero"`
}

Notice what's missing: no json tags, no HTTP concerns, no validation logic. The model is a Go struct that maps to a database table. That's its entire job.

Soft deletes come for free with the soft_delete tag — bun automatically filters deleted records and sets deleted_at on delete calls.


The Repository — hide your database

The repository has two parts: an interface, and an implementation behind it.

// The interface — what the service depends on
type ProductRepository interface {
    FindByID(ctx context.Context, id string) (*model.Product, error)
    Create(ctx context.Context, p *model.Product) error
    ListByOrg(ctx context.Context, orgID string) ([]*model.Product, error)
    Delete(ctx context.Context, id string) error
}

// The implementation — the only place bun is used
func (r *productRepo) FindByID(ctx context.Context, id string) (*model.Product, error) {
    p := new(model.Product)
    err := r.db.NewSelect().Model(p).Where("p.id = ?", id).Scan(ctx)
    if errors.Is(err, sql.ErrNoRows) { return nil, nil }
    return p, err
}

The service only ever sees the interface. This is what makes unit testing painless — swap the postgres implementation for an in-memory mock and your service tests run without a database.


The DTO — what crosses the wire

DTOs are the translators between the outside world and your domain. They live at the boundary.

type CreateProductRequest struct {
    Name  string  `json:"name"  validate:"required,min=2"`
    SKU   string  `json:"sku"   validate:"required,alphanum"`
    Price float64 `json:"price" validate:"required,gt=0"`
    OrgID string  `json:"org_id" validate:"required,uuid4"`
}

A mapper converts between model and DTO so neither leaks into the other's layer:

func ToProductResponse(p *model.Product) *ProductResponse {
    return &ProductResponse{
        ID: p.ID, Name: p.Name, SKU: p.SKU, Price: p.Price,
    }
}

This feels like extra code the first time you write it. By the sixth time you've changed your API response shape without touching a single model file, it starts feeling like the best decision you made.


The Service — where business logic lives

The service is the heart of the application. It knows nothing about HTTP, gRPC, or databases — only about your domain rules.

func (s *productService) Create(ctx context.Context, req *dto.CreateProductRequest) (*dto.ProductResponse, error) {
    p := &model.Product{
        ID:    uuid.New().String(),
        Name:  req.Name,
        SKU:   req.SKU,
        Price: req.Price,
        OrgID: req.OrgID,
    }
    if err := s.repo.Create(ctx, p); err != nil {
        return nil, fmt.Errorf("create product: %w", err)
    }
    return dto.ToProductResponse(p), nil
}

It accepts a DTO, works with a model, calls the repository interface, returns a DTO. Clean in, clean out.


The Handler — the thinnest possible layer

The gRPC handler does one thing: translate proto types into DTOs and call the service.

func (h *ProductGRPCHandler) CreateProduct(
    ctx context.Context, req *pb.CreateProductRequest,
) (*pb.ProductResponse, error) {
    dtoReq := &dto.CreateProductRequest{
        Name: req.Name, SKU: req.Sku, Price: float64(req.Price), OrgID: req.OrgId,
    }
    resp, err := h.svc.Create(ctx, dtoReq)
    if err != nil {
        return nil, status.Error(codes.Internal, err.Error())
    }
    return &pb.ProductResponse{Id: resp.ID, Name: resp.Name}, nil
}

If you ever add an HTTP handler later, it calls the exact same service method. The business logic doesn't move.


The Bootstrap — one file to wire them all

This is the part most architectures get wrong. Dependency injection gets scattered across main.go, or worse, across init() functions. Here it all lives in internal/service.go:

func New(logger *slog.Logger) (*App, error) {
    cfg, err := NewConfig() // reads env vars
    if err != nil {
        return nil, err
    }

    db, err := database.New(database.Config{Driver: cfg.ORMDriver, ...})
    if err != nil {
        return nil, err
    }

    productRepo := postgresRepo.NewProductRepository(db)
    productSvc  := service.NewProductService(productRepo, db)

    gs := grpc.NewServer()
    pb.RegisterProductServiceServer(gs, grpcHandler.NewProductGRPCHandler(productSvc))

    return &App{ProductService: productSvc, grpcServer: gs, ...}, nil
}

And cmd/server/main.go becomes just this:

func main() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    app, err := internal.New(logger)
    if err != nil { logger.Error("bootstrap failed", "err", err); os.Exit(1) }
    defer app.Close()
    app.ServeGRPC()
}

main.go configures the logger, calls New, and runs. That's it. Adding a new service tomorrow doesn't touch main.go.


The Swappable ORM

One thing that bites teams: *bun.DB leaking everywhere. When you decide to switch ORMs, you're refactoring half the codebase.

The fix is a database.DB interface that repositories depend on, with the concrete ORM hidden behind a driver:

type DB interface {
    NewSelect() driver.SelectQuery
    NewInsert() driver.InsertQuery
    NewUpdate() driver.UpdateQuery
    NewDelete() driver.DeleteQuery
    // + standard sql methods
}

Switching from bun to sqlx is now:

ORM_DRIVER=sqlx DATABASE_URL=postgres://... ./myservice

No recompile. No refactor. The driver factory in database.New() handles the rest.
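A factory like that can be very little code. The sketch below is a hypothetical, cut-down version of the idea (the real database.New and its driver packages are more involved): both drivers satisfy one interface, and an env var picks between them at startup.

```go
package main

import (
	"fmt"
	"os"
)

// A cut-down stand-in for the database.DB interface.
type DB interface{ DriverName() string }

type bunDB struct{}
type sqlxDB struct{}

func (bunDB) DriverName() string  { return "bun" }
func (sqlxDB) DriverName() string { return "sqlx" }

// NewDB is a hypothetical factory: one switch on the driver name,
// defaulting to bun when ORM_DRIVER is unset.
func NewDB(driver string) (DB, error) {
	switch driver {
	case "bun", "":
		return bunDB{}, nil
	case "sqlx":
		return sqlxDB{}, nil
	default:
		return nil, fmt.Errorf("unknown ORM_DRIVER %q", driver)
	}
}

func main() {
	db, err := NewDB(os.Getenv("ORM_DRIVER"))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(db.DriverName())
}
```

Repositories only ever see the DB interface, so the switch stays contained in this one factory.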


The CLI — a first-class operator

Most services have a CLI for database migrations, data seeding, and operational tasks. The mistake is treating it as an afterthought.

Here the CLI is a proper entry point at cmd/cli/main.go that calls the same internal.New(logger) as the server. Every CLI command gets the fully wired App — all services, the DB connection, everything.

Commands follow a Laravel-style naming convention — grouped subcommands for the core verbs, colon-separated names for domain tasks:

myservice migrate up
myservice migrate status
myservice seed --fresh
myservice product list --org=<id>
myservice product:sync --org=<id> --dry-run
myservice product:import --file=products.csv --org=<id>
myservice cache:clear

New commands register themselves with a single init() call — no changes to any existing file:

func init() { console.Register(newProductSyncCommand()) }
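The machinery behind that Register call can be as small as a package-level slice that init() functions append to. A minimal sketch of the pattern (not the post's actual console package, and cobra is omitted):

```go
package main

import "fmt"

// A stripped-down command: real code would wrap a cobra.Command.
type Command struct {
	Name string
	Run  func() error
}

// Package-level registry that init() functions append to.
var registry []Command

func Register(c Command) { registry = append(registry, c) }

// Each command file registers itself at import time --
// no central list of commands to edit.
func init() {
	Register(Command{Name: "cache:clear", Run: func() error {
		fmt.Println("cache cleared")
		return nil
	}})
}

func main() {
	for _, c := range registry {
		fmt.Println("registered:", c.Name)
	}
}
```

The trade-off of init()-based registration is that a command package must be imported (often with a blank import) for its init to fire; in exchange, adding a command touches exactly one file.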

What This Gets You

After building a few services with this structure, the benefits compound:

Testing is straightforward. Every layer has an interface. Mock the repository and test the service in pure Go, no database required.

Onboarding is fast. New engineers know exactly where to look. Business logic? service/. Data access? repository/. API contract? handler/.

Changes are local. Swap the ORM, change the transport, add a new feature — each change touches one layer.

The CLI earns its weight. Migrations, seeding, and operational commands are proper citizens, not shell scripts bolted on the side.


The Environment Config

All configuration lives in internal/service.go via NewConfig(). No config package, no viper, no YAML files for local development:

APP_PORT=8080        # gRPC client gateway
GRPC_PORT=9090       # internal gRPC server
DATABASE_URL=postgres://...
ORM_DRIVER=bun       # or sqlx
DEBUG=false

Two gRPC ports: APP_PORT faces external clients (add auth middleware, rate limiting, TLS here), GRPC_PORT handles internal service-to-service calls. Same handler, different server configuration.


Where to Go From Here

This architecture handles the 90% case well. As your service grows, the natural next steps are:

  • Add interceptors (gRPC middleware) for auth, logging, and tracing at the handler layer
  • Wire bun/migrate with versioned SQL files for production-grade migrations
  • Add a health check endpoint on a separate port for Kubernetes probes
  • Use google.golang.org/grpc/reflection on the gateway server for tooling like gRPCurl

The structure doesn't change — you're just adding more to each layer, not rethinking the layers themselves.


The full folder structure with working Go code is available at github.com/ntuple/go-microservice-example.
