Dokku: My favorite personal serverless platform

Apologies for the very off-topic reply, but I can’t help but find it a little funny that on a thread exalting a particular tool, the top comment at the time of this writing is a link to another, newer tool. Not that there’s anything wrong with sharing the link, but it does seem like here at HN we have a bit of a grass-is-greener thing going on. I would understand it more if the discussion was around how bad a tool is and someone chimed in with an alternative. And it’s not like I don’t want people to share these other projects but personally on a thread about a particular topic, the comments I find the most useful are those from people with experience in that topic sharing their opinions, tips, etc. In this case, the comment our community found the most valuable on the topic of Dokku seems to be a link to Dokploy, a project that judging by the commit history is new as of this past April.

I find it helpful to have other tools listed. I already know a decent amount about Dokku and clicked on these comments specifically to find out what other tools might be up and coming or otherwise mentioned in the space.

I’m still waiting for something built on a rootless container solution and with everything defined in git (i.e. no or limited cli commands) so that exactly what is being deployed is, at all times, tracked in git.

If the topic is “favorite personal serverless platform” then discussion of other offerings is absolutely on-topic.

> Apologies for the very off-topic reply … it does seem like here at HN …

There’s nothing more HN than filling the first page of comments with discussion of everything except the linked article.

> it does seem like here at HN we have a bit of a grass-is-greener thing

Since it’s Hacker News, not Old-But-Stable-Project-News, that seems expected? It also happens the other way, and has been happening forever: I published a new OSS project of mine ~10 years ago, it went to the front page, and 8 of 10 comments were recommending other pre-existing tools.

I think I worded my original comment respectfully enough, focusing on the differences between Dokploy & Dokku rather than just saying one is better than the other. I’ve used both successfully and think both are great products – just wanted to share my experience. There seems to be a recent wave of self-hosting tools like Coolify/Dokku/Dokploy etc., so I wanted to contribute to the discussion in that way. Dokploy is also an open-source project, so I thought the exposure might be positive on a high-ranking HN post.

My comment came out crankier than I intended. I do think comments like yours are valuable, and I agree that you were respectful and informative. I’m just genuinely amused that the top comment is for a completely different tool. That’s more an observation about how we vote as a community, not about your post. I include myself in that group though as I have in the past been drawn to the new and shiny over the already known.

Same here. Actually I regularly revisit threads about Dokku, Coolify and CapRover because I know that there are references to other projects that I might have missed.

I personally appreciate it – I really like going to the comments to see other approaches and alternatives whenever something is on here. I don’t think it’s an insult nor do I think it’s out of place if done correctly. HN is one of the only places left on the internet where I expect good value in the comments section and this is one of the reasons.

Thanks for the recommendation. I’ve just given it a try and it looks great. I had tried coolify.io before, but the multi node/swarm support wasn’t great, and the registry didn’t work. Dokploy seemed to work straight out of the box.

One thing I wish it had is preview deployments, though. Coolify had that. But I can live without it.

I have actions on my projects to build & publish container images to GitHub’s container registry. The deploy trigger from the workflow makes Dokploy get the latest image from the registry and run it.
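That pipeline boils down to something like the following (a sketch, not the commenter’s actual workflow; the registry path, token variable, and webhook URL are all placeholders – Dokploy’s real trigger URL comes from the app’s settings):

```shell
# Build and push the image to GitHub's container registry,
# then ping the deploy webhook so Dokploy pulls the new image.
docker build -t ghcr.io/myuser/myapp:latest .
echo "$GHCR_TOKEN" | docker login ghcr.io -u myuser --password-stdin
docker push ghcr.io/myuser/myapp:latest

# Trigger the redeploy (placeholder webhook URL).
curl -fsS -X POST "https://dokploy.example.com/api/deploy/my-app-hook-id"
```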

I was looking at many of these “self-hosted Heroku” type solutions recently and read many HN discussions about the different options (coolify.io, ploi, …) as I migrated to a new server, since constantly copying and adapting nginx configs got a bit old.

I’ve landed on Dokku in the end as it’s the one with the least amount of “magic” involved and even if I stopped using it I could just uninstall it and have everything still running. Can highly recommend it!

The developer is also super responsive and I even managed to build a custom plugin without knowing too much about it with some assistance. Documented this on my blog too: https://blog.notmyhostna.me/posts/deploying-docker-images-wi…

How long did it take you to go from “making a new server / copying configs is fine” to “this is tedious enough I’d like to abstract it?”

Like, was it a years-long journey or is this the type of thing that becomes immediately obvious once you start working w/ N servers or something?

I’m trying to learn the space between “physical machines in my apartment” and “cloud-native everything” and that’s led me to the point where I’m happily using cloud-init to configure servers and running fun little docker compose systems on them.

I wanted to self-host more of my Rails projects, and Dokku comes with nice Buildpack support, so I can just push a generic Rails app and it’ll run out of the box. That, plus having to set up a new server after many years, made me look into it more.

That’s where I am too right now for personal projects, and I ended up reimplementing parts of Dokploy for that, but I don’t feel much of a need to move on from “fun little docker compose” for some reason.

cloud-init is good, but it assumes that you treat your VMs like containers and that means you will need a lot more VMs that you constantly create and destroy and you will have to deal with block storage for persistence.

If all you do is ssh into a system with docker compose installed, you will hardly benefit from cloud-init beyond the first boot.

That’s pretty much the opposite of what I’m searching for. Getting a static site running with https on Dokku on a fresh server is done in under 2 minutes if you type quickly.

1) Run curl command to install Dokku

2) Set up domain to point to my server

3) Run 3 Dokku commands (https://news.ycombinator.com/item?id=41358578)

4) Add remote git url to my repository

5) Git push to that remote

6) Done
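Those steps boil down to a handful of commands. A sketch with illustrative names (the bootstrap version, domain, and app name are placeholders; check dokku.com for the current install URL, and note the letsencrypt plugin has to be installed separately):

```shell
# on a fresh server: install Dokku (version number is illustrative)
wget -NP . https://dokku.com/install/v0.35.0/bootstrap.sh
sudo DOKKU_TAG=v0.35.0 bash bootstrap.sh

# after pointing DNS at the server, create and configure the app
dokku apps:create mysite
dokku domains:set mysite mysite.example.com
dokku letsencrypt:enable mysite   # requires the letsencrypt plugin

# locally: add the Dokku git remote and deploy by pushing
git remote add dokku dokku@mysite.example.com:mysite
git push dokku main
```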

I have found over the years that trying new software risks immediately running into a roadblock in real use. There will be some detail or complexity or bug on a semi-basic requirement that leads directly to an open issue on GitHub.

Dokku is not one of those, it does what it does well and aside from a couple of cli argument ordering quirks it’s been great for my light usage. If I was using it more I’d probably want to configure entire architectures with declarative config files, I have no idea if it can do that though.

Whenever a new tool / library / plugin / whatever is evaluated by me or my team, I spend some time gathering GitHub issue / PR stats. I think this is now part of my “software engineering toolset” / best practices.

If there are too many open PRs or unresolved tickets, OR there are too many _new_ ones, I would rather start searching for something else.

Same. I do a lot of research and try to get a sense of the person/team behind it and their values, vision, dedication, if they accept outside contributions, etc.

A few years ago it became popular to document the project processes, but it all turned into generic code of conduct garbage and abstract governance pillars, as if they were writing a constitution. So looking at GitHub issues and commit history is still the best way. And very well invested time.

Dokku is actually the opposite of that. Super responsive, helpful and nice maintainer (Already mentioned by a few other people in this thread).

I was so positively surprised that I got help so quickly after asking that I started to sponsor it via GitHub immediately.

Striking while the iron is hot:
I love dokku and I’ve used it for many years.
That said, I thought I’d do something similar but with the goal of supporting k8s manifests

https://github.com/skateco/skate

It’s daemonless, multi-host with multi-host networking, service discovery and like I said, allows you to create deployments, cronjobs, and route traffic via ingress resources.

Comes with letsencrypt out of the box.

Dokku is really neat! I’ve been using it before moving to building my own Docker images and deploying with Swarm. It was also (partly) the motivation behind my own take on self-hosted PaaS, Lunni (shameless plug): https://lunni.dev/

In general, I really love the idea of running all your stuff on a server you own as opposed to e.g. Heroku or AWS. Simple predictable monthly bill really gives you peace of mind.

> In general, I really love the idea of running all your stuff on a server you own as opposed to e.g. Heroku or AWS. Simple predictable monthly bill really gives you peace of mind.

Have you found hosting you like with bandwidth expense caps? I’m looking for something like this but I don’t want surprise network bills if I misconfigure something.

> Have you found hosting you like with bandwidth expense caps?

Not exactly what you’re looking for, but solves the same problem in a different way:

I’ve been quite happy with using Hetzner’s dedicated servers which come with 1 GBit unmetered connection (unlimited bandwidth), so no surprise network charges 🙂

Note that if you saturate that 1Gbps link they will almost certainly ask you to stop. Lots of VPS offer “unlimited” but it’s really not. It’s only unlimited within their “fair use” restrictions, ie only as long as they think it’s reasonable.

Would love to be shown a counterexample provider.

Those charges apply to cloud instances, non-standard NICs, load balancers or vSwitches. Their standard 1 Gbps link does not charge usage fees. The one person who reported it did it on several servers, over 3+ months – and they weren’t charged, just asked to slow down or upgrade to a dedicated link.

This happened once (?) in the history of LowEndTalk and Hetzner, and that was someone who was using several servers, 24/7, over several months at 100% link utilization to shovel raw footage around.

You are very unlikely to replicate this running any type of personal infrastructure. Or anything that’s not specifically this.

I’m curious as to your thoughts around Swarm.

My concern around Swarm is around the Docker corporation, which appears to be struggling.

As a competitor, we have Nomad, but with the recent IBM acquisition, I’m concerned about Nomad’s future.

I do have some concerns about Docker Inc. and Mirantis (which now owns Docker Swarm, I believe), yeah. Swarm is pretty mature though, and while I don’t think it’s going anywhere soon, I also don’t think we’ll get any more core features anytime soon.

For Lunni, my plan is to add support for another orchestrator while keeping the developer experience of just working with docker-compose.yml. I really didn’t want to do K8s, but given it’s essentially an open standard now, it should be a safer bet than Nomad. I guess we’ll see when I can get to it!

My gripe with nomad was that it didn’t have init containers. You were basically forced to either have an endlessly growing job specification file or put consul-template in every single container image.

Do you mind if I ask why you chose Docker Swarm? I don’t know that much about Swarm and I’d love to know what you think about it compared to K8s (in terms of ease, nice things, things missing, etc.)

Not lunni’s dev, but a Swarm fan 🙂

I’m a swarm user, but using single node swarms.
It’s the best solution I found for deploying apps. A lot of projects publish docker compose files, and those are easily usable with Swarm after some small modifications. I’m using the setup described at dockerswarm.rocks (1) and it’s smooth sailing.

It’s a real pity, and still surprises me, that Swarm is not more popular. It’s still maintained (2) but few people still recommend it (even dockerswarm.rocks doesn’t anymore). I switched to it in 2022 (2) thinking I didn’t take much risk, as starting with it is a really low investment, and I’m still satisfied with it. I’ve deployed a new server with it recently.

1: https://dockerswarm.rocks/traefik/
2: https://www.yvesdennels.com/posts/docker-swarm-in-2022/

The main reason probably was the fact that I was already familiar with Docker and Docker Compose. Kubernetes introduces a whole lot of concepts that I didn’t feel like studying up, plus there was a 3-node minimum requirement. I just wanted to be able to start with a single node and be able to scale up if needed, so Swarm just felt like a natural match here.

I’m looking into K8s and other orchestrators like Nomad and perhaps will add support in Lunni at some point, but for now I believe Swarm is the sweet spot for smaller deployments (from single server up to maybe a couple hundred nodes).

There isn’t actually (nor was there ever) a 3-node requirement for k8s.

Etcd requires 3 boxes for HA, but nothing stops you running a single node etcd.

I personally run single master clusters, because if the master goes down, you lose management as opposed to actual service availability, so mostly I don’t care.

Not that there’s anything wrong with your preference.

I might be misremembering it, huh! Yeah, it’s pretty much the same as Swarm then (any odd number of manager nodes is valid, and if more than a half go down you only lose the management ability and everything else stays up).

I’ll look into it, thank you so much! Way back then there wasn’t a lot of choice though. I think I’ve played with Minikube but that was not recommended for production, and all the other distributions were huge (or at least I thought so!).

Not too bad, except I have no idea how many users we have :’)

Swarm still works pretty smoothly for me, although I’m worried about the Mirantis situation, too. I’m currently working on a new backend, which will also enable us to plug in other orchestrators if need arises.

> It’s often desirable to have HTTPS for your site. Dokku makes this easy with the Let’s Encrypt Plugin, which will even auto-renew for you. I don’t use this, because I’m letting Cloudflare handle this with its proxy.

Hopefully you do use TLS between Cloudflare and your Dokku (even with a self-signed cert or something), otherwise your personal sites (which are apparently sensitive enough to put behind basic auth) are being transited over the internet in plaintext.
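For reference, a minimal way to do that, assuming Cloudflare’s “Full” TLS mode (which accepts a self-signed origin cert, unlike “Full (strict)”); the domain and app name are placeholders:

```shell
# generate a long-lived self-signed cert for the origin
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout server.key -out server.crt -subj "/CN=mysite.example.com"

# attach it to the Dokku app so nginx serves HTTPS to Cloudflare
dokku certs:add mysite server.crt server.key
```

Cloudflare can also issue free “origin certificates” for exactly this hop, if you’d rather not self-sign.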

You have to limit inbound traffic to Cloudflare’s IP ranges to prevent people from accessing your server directly. But that’s not enough on its own, because other people can use Cloudflare’s IPs to scan you too, so you need some kind of auth on top, or use the tunnel.

Typically my server is behind NAT and has no public address; the service can only be reached through the CF tunnel, and my access is through VPN. This should be safe, right?

One benefit might be avoiding mass traffic interception due to malicious or corrupt BGP routes, whether by accident or on purpose by a nation-state or telco. Another might be avoiding interception by your own ISP for various purposes.

Delighted to see dokku on here. It’s an amazing product and the founder is super humble and helpful.
I can’t afford to throw much money at it now but it would be great if more people supported it financially

My experience with dokku was pretty poor. It was quick to start with, but when my VPS crashed and restarted, my apps would not relaunch. I’d have to re-run the dokku commands again. Perhaps I did something wrong, but I eventually switched to a single-node k8s setup as it ended up being more reliable.

Dokku is great, but historically it didn’t really handle resilience. It looks like there’s now a K3s scheduler (added earlier this year), which would mean I could use a Kubernetes operator for a replicated database as well as have the app running on multiple boxes (in case one fails). It looks like it’ll even set up K3s for you. The docs don’t seem to go into it, but hopefully the ingress can also be set up on multiple boxes (I wonder if it uses a NodePort or the host network).

I was sad when Flynn died (https://github.com/flynn/flynn), but it’s great to see Dokku doing well.

> Dokku is great, but historically it didn’t really handle resilience.

Would you mind elaborating a bit on this? I’m exploring some serverless options right now and this would be useful info. Do you mean it’s not really designed out of the box for resilience, or that it fails certain assumptions?

I’m not the person you’re responding to, but I believe I can answer that question as well.

Dokku essentially just starts a container. If your server goes down, so does that container, because it’s just a single process, basically.

Other PaaS providers usually combine it with some sort of clustering like k3s or docker-swarm, which provides them with failover and scaling capabilities (which dokku historically lacked). I haven’t tried this k3s integration myself either, so I can’t speak to how it is nowadays.

Yea, this. Dokku was basically a single-server thing. If that box dies, your site goes down until you launch it on a new box. That might not be a huge deal for smaller sites. If my blog is down for a day, it’s not a big deal.

With a cluster, if a server goes down, it can reschedule your apps on one of the other servers in the cluster (assuming that there’s RAM/CPU available on another server). If you have a cluster of 3 or 5 boxes, maybe you lose one and your capacity is slightly diminished, but your apps still run. If your database is replicated between servers, another box in the cluster can be promoted to the primary and another box can spin up a new replica instance.

Dokku without a cluster makes deploys easy, but it doesn’t help you handle the failure of a box.

Yeah the k3s scheduler is basically “we integrate with k3s or BYO kubernetes and then deploy to that”. It was sponsored by a user that was migrating away from Heroku actually. If you’ve used k3s/k8s, you basically get the same workflow as Dokku has always provided but now with added resilience.

Note: I am the Dokku maintainer.

Can someone explain what this is for someone who doesn’t know what Heroku is, and why it’s called serverless when it clearly requires a server?

“Serverless” means there’s no stateful application server in your architecture. No instance, just callable code. Shell script, not daemon.

You write narrow business-level functions that take inputs, do their business, and give you the output. They’re called, they run, they terminate and carry nothing forward. You deploy them to a hosting platform which will handle making them available, routing requests, and all the other stuff that’s incidental to doing the work. The only thing that’s your concern is the logic that turns inputs into outputs. At least that’s the pitch.

Technically a PHP script that you run as CGI through Nginx is “serverless” that way, though of course it’s hosted with a software server running on a hardware server. It’s serverless because you write a PHP script and it doesn’t care what runs it or what’s going on around it. It doesn’t need to work within the context of an application server like an endpoint in a Django or a Rails app would.

Someone else could own a bunch of servers running Nginx, you’d give them prettyprint.php, and it would pretty-print your JSON or whatever at the URL they’d give you.

Services that do this are called serverless platforms. The hosting model is called “functions as a service” (FaaS). If you like the architecture but you don’t want FaaS from Amazon, Heroku, or some other third party, you can host yourself a mini FaaS with something like Dokku.

heroku is (was?) a platform for hosting web apps with a lot of nice ergonomics, letting you run commands like “heroku up” to get your app running on the web with minimal fuss.

this appears to be a project that gives you the niceties of a PaaS without actually providing the platform.

My major gripe with dokku is that there is no way to define the configuration in a file rather than executing the commands manually.

Otherwise: totally agree, great tool for self hosting.

We have ansible modules (https://github.com/dokku/ansible-dokku) that cover the majority of app management, if that’s what you want. The reason I am hesitant to do it in something like `app.json` is purely because one might expose Dokku to users who only have push access, and some of those commands can be fairly destructive.

Disclaimer: I am the Dokku maintainer.

Thank you! I was hoping for something less intimidating than going full ansible/terraform.

Essentially something that captures all dokku invocations and could be transferred to another machine. Is app.json this?

I really hate adding knobs – it increases the amount of work I need to do to maintain and support the project.

Long term, I’d like to port the ansible modules over to being maintained internally by the dokku/omakase project, and then maybe that could be a plugin that folks could run from within their deploy.

I believe they are talking about the Dokku commands that are needed to set up a new Dokku app.

For example for a static site that would be the following:

    dokku apps:create dewey.at
    dokku domains:set dewey.at dewey.at www.dewey.at
    dokku letsencrypt:enable dewey.at

That’s also one of the things I wish were improved. Currently I just have a long text file where I store them, so that if I move servers I can just re-run them if needed.

Could you put them in a .sh file and then just run `sh setup_dewey.sh`? Maybe put `&&` between them so that if one fails, it won’t keep running through the script?
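Something like the following, using `set -e` so the script aborts on the first failing command (same effect as chaining every line with `&&`):

```shell
#!/bin/sh
# setup_dewey.sh: re-create the app on a fresh Dokku host
set -e

dokku apps:create dewey.at
dokku domains:set dewey.at dewey.at www.dewey.at
dokku letsencrypt:enable dewey.at
```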

Yep, in theory I think that should work nicely. So the recovery procedure after a server died would be to restore the dokku data directory from backup and then re-run all the commands. I haven’t tested that but I think that should do the job.

Right now I keep the list of commands more as a reference to look up things like how I mounted a volume or my naming scheme for data directories.

One more upvote for Dokku. Been using it for as long as I can remember hosting things on servers. It is such an incredible piece of software. And open source to boot. If any of my projects ever make money, Dokku will be the first project I’m funding.

I started out with Dokku and it was fine, but ultimately switched to Coolify solely because it has a web UI. Dokku Pro has one as well, but my use case was primarily just for hosting a demo and spinning up instances from GitHub PRs so didn’t feel worth spending money on.

that’s what I do and it’s easier to set up and understand than dokku, that’s for sure. tried using dokku multiple times and ran into so many issues with it. with traefik you literally just have to copy and paste a few lines in a compose file and push to docker hub and traefik will pick it up.

It’s the backend that implements serverless architecture. A serverless server, I guess. Roll your eyes if you like, but “serverless” is still a snappier term than “declarative on-demand server provisioning, configuration, and scaling” and most people are into that whole brevity thing.

Except Dokku doesn’t do those things. Dokku doesn’t scale your app automatically and it doesn’t shut it off when it’s not being used. It runs your web server process continuously, handles some 12 factor config, and does some nginx stuff for you. Until this year it didn’t support managing a cluster at all and was entirely focused on single box deploys. The scale command just runs more processes on the same machine which if you’re not using Node is probably not even a good idea to do.

Here’s a better question. For people that roll their eyes at the mere mention of “serverless” (like me), what is the value proposition of Dokku over VMs and your own dockers?

Don’t convince me it is like AWS serverless. Convince me to give up VMs and docker images.

One benefit for me was that a VPS with 2 vCPUs and 2-4 gigs of RAM is about the same price as one of these app engine services. So I can run 3 or 4 dokku apps, including Postgres, redis, and memcache connected to them, on my own VPS and still have margin when inspiration strikes. I moved a production app from heroku to dokku and saved hundreds a month and still got tons and tons more compute.

In terms of money it might end up cheaper for the infra itself (especially if it runs in your garage)

In terms of your personal time, well, you won’t grow a business on it without hiring people to maintain it or spending a lot of time doing devops.

> Convince me to give up VMs and docker images.

I am in the same boat. Using VMs and docker images and not sure how this would benefit me.

I have looked at AWS serverless stuff. It appears to solve problems I don’t have.

Well, its problem is that you still have to… install the entire thing. Whatever serverless means, it should allow programmers to focus on their code, not on the platform. Like, at all.

Is there any advantage of Dokku over using Kubernetes? (I already have a personal single-node cluster.)

I initially set up Dokku on K8s, but since it would just deploy to that same server, it makes more sense IMO to just use K8s.

If you’ve already got k8s set up and you’re comfortable with it, I’m not sure that Dokku offers much?

But… wow. For my single non-k8s server at home, Dokku makes getting stuff running behind HTTPS about as simple as I could hope for!

I liked Dokku when I was still happy using docker, but since I started working on https://www.dbos.dev/, I value microVMs way more.

The problem with Dokku is that, while it’s easy to use if you have experience in devops, well… you still need to know devops! That’s not what I call serverless…

Don’t you need to run everything on bare metal to effectively leverage microVMs? AFAIK, unlike containers, you can’t efficiently run microVMs on cloud VM instances.

You need nested virtualization, which many VMs support (it is architecture dependent). But yes, to maximize the benefits you’ll want to run on bare metal.

From the standpoint of a cloud user, the kind that likes Dokku, the experience is cheaper/faster/more secure if the infra uses uVMs vs containers.

I’ve been using a different tool that provides great developer UX for managing containerized web apps on your own servers. It’s dead simple and does things like zero-downtime deploys and remote builds.

https://kamal-deploy.org/

I use it with rails but it works with any containerized web apps.
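Roughly, the day-to-day workflow looks like this (Kamal is distributed as a Ruby gem; treat this as a sketch, since exact commands can differ between versions):

```shell
gem install kamal    # install the CLI (needs Ruby)
kamal init           # generates config/deploy.yml to fill in
kamal setup          # bootstraps the servers and does the first deploy
kamal deploy         # subsequent zero-downtime deploys
```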

I’ve looked into this too, but it always felt like it’s best suited for a “one app per server” model, and not really like Dokku which makes it easy to run many workloads on a single server.

Did I misunderstand something there?

I’ve got two apps running on two servers (ie both on both) using Kamal and Traefik and it works great. I didn’t set it up but I deploy with it almost every day.

I already run multiple apps on a single server with Kamal and it works fairly well. Sometimes there are issues, especially if apps share resources such as redis. But overall it is stable and works well.

Looks neat, but what exactly makes it “serverless”? It’s literally an application that you have to run on your server.

Edit: turns out (thankfully) that it’s only the author of the article using that term. The project site (https://dokku.com/) is very descriptive.

I always understood “serverless” as, your main application isn’t running all the time, but once you make a request to it, another process starts your main application and then your request gets forwarded to the “main” application.

But I never really got it, so I may be completely wrong.

Kinda, but you are describing an implementation detail. More broadly, here’s how it works:

Infrastructure as a Service (IaaS) – you rent a VM with a publicly accessible IP address. Everything else – patching/updating the OS, deploying your application code or binaries, process lifecycle management, logs, TLS certs, load balancing multiple servers and more – is your responsibility. Example: EC2.

Platform as a Service (PaaS) – the provider also manages the OS for your VM, including deploying and running your code on it, restarts, a logging pipeline, providing a HTTPS URL, scaling to multiple servers and more. All you have to do is write your application code to start a web server and listen for web requests on a particular port. Example: Heroku.

Functions as a Service (FaaS) – this goes one step further, and the concepts of web servers, ports and HTTP requests/responses are also abstracted out from your application code (hence “serverless”). You write a function with a set of inputs and outputs, and it’s up to the platform to execute this function whenever demanded. The request can be sent via HTTP or a message queue or something else entirely. Your code itself doesn’t have to care. Example: AWS Lambda

Practically, it’s always running, as otherwise you’ll get a cold start delay, but that’s close enough. It doesn’t mean there’s no server, so the “lol how is it serverless if you have a server” meme is tiring.

I am wondering why PHP wasn’t more popular. Think of PHP as a serverless platform:

1. 10 minutes to learn, simple syntax, rich built-in functions

2. deploy with standard tools like ftp

3. easy scaling with php-fpm.

4. not exactly docker, but vhost works in most cases. And vhost can be containerized.

5. insta hot-reload

6. dirt cheap cost. Hi dreamhost

7. no vendor lock-in bullshit, supported by everyone everywhere

well, at least that used to be the case

All of those are easier in Python or Ruby or JavaScript. And all of those languages have done things properly so you write less code, with fewer errors and better performance.
That’s why PHP became a dinosaur.

> not exactly docker, but vhost works in most cases

You are understanding Docker and vhosts wrong, then…

Does PHP run your database? Your caching layer? Handle SSL? Can you run a queue with PHP? Would you? If your answer to any of those things is “no”, then PHP isn’t a serverless platform. It is a language that you use to build web applications. That makes it very different from Dokku, Kubernetes, etc.

But yes, you’re right that PHP is still a great choice for building the “web application” portion of your infrastructure.

To be fair, serverless platforms don’t always provide a queue or database. Usually they come along with some SQLite or KV store for convenience.

SSL was provided by the web server in front of php-fpm, either Apache or nginx, like most “platforms” do. Modern platforms may choose less-popular front ends like Caddy, and Caddy does support php_fastcgi.

The purpose of serverless is to easily deploy applications without worrying about or managing infra at all. The only reason you don’t need to run your own infra with PHP is that it doesn’t do anything except the web application. As you noted, it doesn’t even do the web-serving. Most serverless application platforms are then paired with hosted databases, hosted queues, etc., so that you don’t need to manage those either. But yes, only some of those additional pieces can be described as “serverless”, e.g. AWS Aurora is an actual serverless database, vs DigitalOcean’s hosted Postgres, which does not scale in a “serverless” way. Regardless, minimal management of the database is required on these platforms.

Dokku in turn is a replacement for those platforms and makes it easy to host your own (insert some part of your platform stack here). That is the purpose of a PaaS, which is what Dokku is and what PHP certainly is not. Yes, the OP maybe should’ve said “My favorite personal PaaS”, but that is less catchy, and I think calling it a “serverless platform” is close enough to the truth and gets the point across.

> Most serverless application platforms are then paired with hosted databases

> Dokku in turn is a replacement for those platforms and makes it easy to host your own

I tried to search

database site:dokku.com/docs/

or

queue site:dokku.com/docs/

But failed to find anything relevant. Am I doing this wrong?
Where does Dokku as a “replacement platform” provide the database or queue capability?

> purpose of serverless is to provide scaleable infra which you don’t need to manage

> Dokku is an extensible, open source Platform as a Service that runs on a single server (from https://dokku.com/docs/)

and how does Dokku on a single server solve the “scaling” problem exactly?

> As you noted, it doesn’t even do the web-serving

Technically speaking, PHP can. https://github.com/swoole/swoole-docs/blob/master/modules/sw…

It's just not widely used.

Don't get me wrong, Dokku is a fine project in its own right, but it doesn't seem to tackle the problems you mentioned either. On the other hand, if PHP alone is not enough, the LAMP stack can do everything.

I’d recommend reading more thoroughly about Dokku (or PaaS in general) since it is somewhat clear you don’t understand the value prop. Just grepping for certain terms isn’t gonna get you very far.

Dokku is Docker + Buildpacks + Orchestration. You give it a buildpack (developed/popularized by Heroku) to specify your application, which is the main “serverless” part. Then you tell Dokku that you want datastores or queues or whatever, via plugins(1). Dokku handles pulling the relevant docker images, running them, and linking it all together. In other words, Dokku is open-source Heroku, where you can add databases and such with a single command, specify your app with a buildpack, and you’re off to the races.
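The "single command" flow described above looks roughly like this (a sketch based on the official dokku-postgres plugin; the app and service names are placeholders):

```shell
# On the Dokku host: create the app, install the Postgres plugin,
# create a database service, and link it to the app.
dokku apps:create myapp
sudo dokku plugin:install https://github.com/dokku/dokku-postgres.git postgres
dokku postgres:create mydb
dokku postgres:link mydb myapp   # injects DATABASE_URL into the app's environment
```

From there a `git push` with a buildpack-detectable app is all that's left; Dokku wires the container to the linked service via environment variables.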

If Dokku only did the application via buildpack part, people would not find it nearly as useful and PHP on a webhost somewhere would indeed be a good alternative.

But Dokku is not that, it is a PaaS, so it can also run your PHP application just as easily. And if you’re self-hosting Postgres and RabbitMQ via Dokku, why would you run your PHP on Dreamhost? You probably wouldn’t, you’d throw a PHP buildpack up there and call it a day.

(1) https://dokku.com/docs/community/plugins/?h=plugins#official…

ok, but "pulling the relevant docker images" does not scale, bro. There's a gazillion parameters to fine-tune for Postgres alone, and trust me, running a DB inside a container is bad, as stateful containers are terrible in general.

And I really do wish scaling were as easy as "pulling docker images".

So, how is PHP + queue library different than Dokku(or PAAS as you call it) + queue plugin? Other than the fact that you write it all in yaml/json/gui instead of code?

> “I’d recommend reading more thoroughly about Dokku (or PaaS in general) since it is somewhat clear you don’t understand the value prop.”

My cynical and uncharitable translation of this and the rest of your post for the GP: We’ve invented platform or PAAS or devops so that we can carve-away work from devs that used to traditionally do this kind of stuff in partnership with infrastructure, and you’re just not understanding it because you’re coming from a dev perspective.

It’s all just evangelism/marketing-push to gather the online webosphere mindshare and capture the development process so that all that’s left for the downstream individuals (developers) is nothing but config/yaml/plugin-ecosystem and even that stuff only the ordained priests of DevOps understand (or are allowed to touch), again leaving developers with even less, making them glorified code-monkeys that need to spit out CRUD screens. It’s either that, or put “DevOps” on your resume or title, almost like a mafia-style shake-down.

It works, sure, provides immense value sometimes and gets you to market quickly sometimes, sure, etc. But damn is it toxic and detrimental for our community in the long run. In an alternate universe or timeline we could have gone some other direction, but this is what we have for now, so we have to live with it. DevOps and UX will many years from now be known as two very bad paths that we allowed our industry to be dragged down into.

But that’s the reason why serverless has mostly failed so far — because it is not stateful. And that is only because most existing offering don’t have the correct primitives.

By that I mean that if the programming model has a way to automatically capture state, an implementation framework can manage it automatically on behalf of the application, while the user (person writing the app logic) doesn’t have to care about the state.

Is anyone running TrueNAS SCALE for this kind of purpose? I haven't used it, but its architecture around k8s seems extremely promising. For most use cases a simple Docker container is all you need, but sometimes running other apps like Grafana with a k8s manifest is easier to manage on one VPS and gives you the flexibility of a cluster. Just curious.

I used to use it but what got me was letting my Dokku install get stale and then upgrading a whole bunch of versions in a row. The old plugin broke, the new one wasn’t compatible, there were version issues.

Nowadays I just run Postgres directly on my Debian box and create a new user/DB for every application, then set an env variable for the Dokku app to connect. Postgres is so solid to begin with that it requires no babysitting unless you have very intense workloads (at which point either use a hosted solution or start thinking about how you'll do your own DBA).
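For reference, that per-app setup is only a few commands (a sketch; the user, database, and app names are placeholders, and the `DATABASE_URL` format assumes Postgres on the same box):

```shell
# On the Debian box: one Postgres role + database per application.
sudo -u postgres createuser --pwprompt myapp        # prompts for a password
sudo -u postgres createdb --owner=myapp myapp_db

# Point the Dokku app at it via an environment variable.
dokku config:set myapp \
  DATABASE_URL="postgres://myapp:SECRET@localhost:5432/myapp_db"
```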

I’ve been using dokku for years. Constant useful updates over time and the developer was super helpful with a niche case using old documentation.

I have a heroku app that I’d love to try migrating. It’s a pretty simple express.js single page app running on heroku’s lowest level, uses firebase and has no database or other backend dependencies. The domain is on godaddy and it uses Cloudflare for DNS. Heroku’s “Automated Certificate Management” takes care of the SSL cert.

The main issue is that it’s for playing a game, and the game is held in-memory, and once a day heroku restarts their servers, so everyone gets kicked out of the game they’re in when it restarts with cleared process memory. I need to fix this by migrating and I don’t have time.

If anyone feels like this migration would be something that they have relevant experience for and which they could do confidently, please get in touch. Email in profile.

Your server will likely have to restart occasionally regardless of whether you migrate or not. Maybe you could look at using Redis to store your game state. I think Heroku has a free add-on.

Sometimes the slightly more complicated option (Dokku is still a very thin bash wrapper around running git, ssh and docker commands) is simpler just because it has great documentation and other people using it.

Are there plugins or anything that will allow Dokku to run queues of batch jobs? Skimming the docs there is clear support for Heroku-style apps and one-offs that can be launched via cron, not so much for batch although it would be quite useful.

Dokku maintainer here.

You may wish to look into a message processing system in your language of choice and run that as a daemon in Dokku. We have plugins for various datastores, and commonly I see folks just connect their worker processes to that. It’s much lighter weight than spawning new containers for each workload.

In my case spawn overhead is negligible relative to the jobs themselves, my existing experience with AWS Batch has been pretty good. I took another look at the Dokku documentation and extending `run:detached` would work well enough, though maybe it’s time for me to revisit Airflow or something in that direction.

Any suggestion for a simple FaaS platform that isn’t OpenFaaS? Fn Project looked promising, but their repo looks abandoned (more than one year without commits).

Looking at the examples in the post above, and looking at the Dokku site and documentation: is it the case that there is no visual interface in the free version? Just hacking around some files? That is not that user friendly and certainly doesn't really remind me of Heroku.

The GUI you get with the Pro version looks good, and it's only a bit more than $800 for life.

IMHO Dokku still outperforms all other open-source alternatives for deploying Rails apps. There are a few proprietary alternatives that manage the job with far more simplicity, but those are paid… I have tried to deploy with Kamal, the DHH junta's preferred way, but it is still not better than Dokku in its management and simplicity, and on top of that, it follows the framework's latest trend of poor-to-no documentation.

The documentation is a major turn-off. I haven't revisited the situation in a few months, maybe things have changed, but I could not deploy a single app to my traditional cloud VMs… I never struggled that much with Dokku; that's why the comparison…

I love dokku. And josegonzalez is always a huge help.

I pay a monthly support to dokku. You should too. Jose will help you either way, but I feel slightly less guilty when I ask questions and he immediately resolves them for me in the slack channel. Don’t you want to use this incredible piece of software guilt free?

Dokku is AMAZING. I’ve been using it for about 6 years, never had a single problem with it and I host a lot of apps on a single instance server. Can’t recommend it enough

Currently using Convox a lot, but miss the simplicity of Heroku. Anyone know if there’s a good comparison breakdown of all the PaaS options out there?

No. With Dokku you can just push to git remote and it’ll build, deploy the image, set up LE certificates, roll out the app with zero downtime (if you want). To get this running you’d have to do some manual stuff with git commit hooks, but that’s just one small part of Dokku.
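The push-to-deploy loop being described is essentially this (a sketch; the host, app, and email are placeholders, and TLS comes from the separate dokku-letsencrypt plugin):

```shell
# One-time: point a git remote at the Dokku host (app name "myapp").
git remote add dokku dokku@my-server.example.com:myapp

# Every deploy after that is a plain push; Dokku builds the image,
# rolls it out, and keeps the old container until the new one is healthy.
git push dokku main

# TLS via the dokku-letsencrypt plugin (run on the server).
dokku letsencrypt:set myapp email you@example.com
dokku letsencrypt:enable myapp
```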

I have gitlab-runner on my VPS, all it does is `docker compose up`, that already includes a traefik setup with LE certificates.

Zero-downtime does sound interesting though, and is probably better than `traefik.http.middlewares.test-retry.retry`.

I see compose in production all the time – especially from folks that want compose support _in_ Dokku. I brought this up with the compose project manager a few months back. It seems like an interesting use case but it didn't seem like the Docker folks were… aware that this was how folks used docker compose? There is a project out there – Wowu/docker-rollout – that sort of provides this but it has some rough edges.

Disclaimer: I am the Dokku maintainer.

My understanding is that they are focused on the local development loop at the moment, especially with the acquisition of Tilt. That said, I don’t work there so take this all with a grain of salt.

I’m going to confess something: I still do it oldschool. A single box with a SQL server and a webserver running on it. I’ve taken courses in Docker and whatnot but never applied them.

When you’re hosting a single-node cluster, what value do these docker-based tools offer? Is it the fact that you can use a dockerfile to declare your OS-level dependencies consistently?

I dropped armhf (32-bit ARM) a few releases ago. It was painful to maintain, and the few users of it were older Raspberry Pi installs. I think there are other tools out there that better support low-powered platforms (piku comes to mind).

ARM64 should be fine, with some caveats:

– Dockerfile/nixpacks support is great! Just make sure your base images and your Dockerfile supports ARM64 building
– Herokuish _works_ but not really. Most Heroku v2a buildpacks target AMD64. This is slowly changing, but out of the box it probably won’t build as you expect.
– CNB Buildpacks largely don’t support ARM64 yet. Heroku _just_ added ARM64 support in heroku-24 (our next release switches to this) but again, there is work on the buildpacks to get things running.

I run Dokku on ARM64 locally (a few raspberry pis running things under k3s) and develop Dokku on my M1 Macbook, so I think if there are any issues, I’d love to hear about them.

Disclaimer: I am the Dokku maintainer.

What’s funny is before I read this I thought Dokku was just used as a convenient driver for kitchen testing and stuff like that. I never really thought of it as a PaaS!

Self-hosted solutions are the way.

No one will be stealing your data as the big corps do.

Less chance of overrunning your budget, because of how cloud platforms conveniently have no brakes on utilisation of resources.

> No one will be stealing your data as the big corps do.

We take care of that ourselves by self-hosting and having faulty or unmonitored backups 🙂

Digital Ocean is even on the expensive side these days. There are a lot of vps providers that offer decent pricing and include bundled bandwidth. Some of the hosts may not be good for business-use, but should be fine for personal stuff.

I do have a hard time understanding how this is different from setting up a VM with Docker running on it.

I don't see any added benefits in terms of the extent of configuration I need to deploy. What is the new thing Dokku and other similar services bring to the table? What extra configuration do I not have to do if I go with it?

Salesforce’s “no software” slogan dates back to a time when software was sold in boxes on store shelves. They were one of the first (possibly the first) cloud-based business app. The slogan only looks weird in hindsight because the term “software” now includes SaaS apps.

Most of the time I see "serverless" used these days, it's referring to the fact that server specifics are abstracted away from the application's deployment artifacts. Instead it just runs on a platform of some kind without worrying about being built for a specific version of a specific OS, etc.

While it is being used a bit recklessly here, taking it literally is about as insightful and constructive to discussion as pointing out that “cloud” servers are located on the ground.

I would even defend its usage here by pointing out that it's entirely possible to use this at a company in which the servers are managed by one person or team, and the developers building applications simply interact with the service and never touch a server themselves. Neither team has to touch the other's scope, making it indistinguishable from conventional "serverless" approaches in which the decoupling occurs across companies rather than across teams within one company.

I honestly don’t see the need to change it. The first use of “serverless” is from this article:

https://readwrite.com/why-the-future-of-software-and-apps-is…

In it, he points out:

>The phrase “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think that much about them.

I don’t know how anyone could interpret “serverless” as meaning there’s no server involved at any point in the application’s execution, and if they did, I’m not sure what harm it causes? It seems like the only objection here is a pedantic urge to be more correct.

It’s an interesting debate, and if I may intrude on this conversation, here is some reasoning to support the opinion that more accurate and descriptive terms than ‘serverless’ could be found.

Abstracting the role of a component does not reduce its importance, e.g. the skeleton of an endoskeletal animal is encapsulated behind flesh, while still providing vital support. It would not make sense to call these animals ‘boneless’, since to do so would confuse them with the class that does lack bones entirely.

In the software realm, there are some applications that do not require a server to perform their core functions, which could rightly be said to have a ‘serverless’ architecture. On the other hand, those that do have a server, but one that is managed by software instead of manually, would more appropriately be placed under a label like ‘abstracted-server’, ‘managed-server’, or even ‘auto-server’ architecture. Yes, these terms have more syllables, but of course acronyms or abbreviations could be formed.

This debate is not simply pedantic. Clarifying terminology helps to achieve a more consistent mental model. Additionally, if future generations inherit our terminology, a term such as ‘serverless’ would create more confusion for the future developers who must be taught that such an architecture does, in fact, have a server.

Can we not do this? Everyone knows that “serverless” doesn’t actually mean there are no servers. It’s not productive to do this “haha gotcha!” trope every time someone uses the serverless term.

Serverless refers to the fact that you can launch individual workloads on the platform while abstracting away the underlying infrastructure. Yes, to set up dokku you still need to provision a server. But to deploy an application onto dokku after it’s been set up, you do that without worrying about provisioning new infra for your app. That’s what is “serverless” about it, and it’s a perfectly acceptable use of the term.

“Serverless” means 1- pay for what you use. 2- No infra setup, ever. Dokku requires a server that you have to manage. It is not serverless. You should never hit a scaling wall.

Azure Functions are serverless to us, but for the team developing and deploying that feature they are not serverless.
Dokku provides a tool so that once you deploy projects they can be ‘serverless’.

1- That would throw out serverless with (free) unlimited use (A la PocketHost)

2- Then getting a third party to set up Dokku and then using that would qualify
(Because it’d be the same as getting AWS to setup their server abstraction)
The platform is serverless; you hosting it probably isn't, maybe "server-light", since you set up the abstraction once and use it for many apps.

> Can we not do this? Everyone knows that “serverless” doesn’t actually mean there are no servers.

Then maybe people shouldn’t use a term that means “there are no servers”. One doesn’t get to complain if they use a word to mean something the opposite of its actual meaning, and then people don’t like it.

Are you the sort of person that says “ackshewelly people in orbit aren’t weightless” or complains that wireless headphones technically contain wires, or that motionless rocks are actually moving really fast because the Earth is moving through space…

It’s just annoying.

It is, but “We provide a programmatic interface for deployment that allows deploying docker containers on a VPS or server that you control” doesn’t have a good buzzword.

This is the part that gets me every time. It sounds pretty neat, but.. if it only works on one server – what’s the point?

What about things like scaling, or even just what if your one server runs out of resources to fit more apps on?

It's a tradeoff: a real PaaS is more managed, fault tolerant, and scalable; Dokku is much less expensive, especially with multiple projects.

One server scales vertically and can serve a good number of projects and users. Huge spikes, e.g. due to attacks, lead to outages instead of runaway bills.

We’ve supported multiple servers for a few years and have had official k3s support since the beginning of the year, so not just one server anymore. We even support managing the servers associated in a k3s-based cluster.

Disclaimer: I am the Dokku maintainer.

Ohhh, I stand corrected. I don’t think that was an option the last time I looked at Dokku. I see the schedulers section in the docs now, thanks for pointing it out!

Does the k3s scheduler work with existing non-k3s k8s clusters as well?

You can still solve for that just fine in a multi-Dokku setup. It’s just that Dokku won’t do the coordination for you. Sometimes you do want something more integrated like k8s/openshift/nomad instead; sometimes not.

Dokku’s multi-server offering is based on k3s. We interact with k3s but offload any actual clustering to k3s itself as it does the job better than Dokku could 🙂 You can also just tie Dokku into an existing K8s cluster on your favorite cloud provider instead.

Disclaimer: I am the Dokku maintainer.

>what’s the point

To get started in a simpler way, and in a way that solves 80% of use cases.
Once you need to scale, then you can scale. Why worry about that upfront with all the complexity it entails?

>what if your one server runs out of resources to fit more apps on

There’s vertical scaling. Rent a bigger server.

Dokku is great, but have you tried https://ptah.sh ? 😉

(sorry)

This is the service I have been working on for the last few months alongside my 9-5. Heavily inspired by Coolify, but it is based solely on Docker Swarm to save the development effort for other features.

Also, it is a bit opinionated, to adjust the UX to what I need myself, so there are slight deviations from how others work with Swarm.

I have a short vid which I have recorded today on how one could easily deploy WordPress to any VPS: https://youtu.be/k34Zdwcsm6I

It covers usage of the 1-Click apps templates which speed up everything “a little bit”.
