Autobase for PostgreSQL® is an open-source alternative to cloud-managed databases (DBaaS) such as Amazon RDS, Google Cloud SQL, Azure Database, and more.
This automated database platform enables you to create and manage production-ready, highly available PostgreSQL clusters. It simplifies the deployment process, reduces operational costs, and makes database management accessible—even for teams without specialized expertise.
Automate deployment, failover, backups, restore, upgrades, scaling, and more with ease.
You can find a version of this documentation that is searchable and also easier to navigate at autobase.tech
Autobase has been actively developed for over 5 years (since 2019) and is trusted by companies worldwide, including in production environments with high loads and demanding reliability requirements. Our mission is to provide an open-source DBaaS that delivers reliability, flexibility, and cost-efficiency.
The project will remain open-source forever, but to ensure its continuous growth and development, we rely on sponsorship. By subscribing to Autobase packages, you gain access to personalized support from the project authors and PostgreSQL experts, ensuring the reliability of your database infrastructure.
You have three schemes available for deployment:
This is a simple scheme without load balancing.
- Patroni is a template for you to create your own customized, high-availability solution using Python and - for maximum accessibility - a distributed configuration store like ZooKeeper, etcd, Consul or Kubernetes. Used to automate the management of PostgreSQL instances and automatic failover.
- etcd is a distributed, reliable key-value store for the most critical data of a distributed system. etcd is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log. It is used by Patroni to store information about the status of the cluster and PostgreSQL configuration parameters.
- vip-manager (optional, if the `cluster_vip` variable is specified) is a service that gets started on all cluster nodes and connects to the DCS. If the local node owns the leader-key, vip-manager starts the configured VIP. In case of a failover, vip-manager removes the VIP on the old leader and the corresponding service on the new leader starts it there. Used to provide a single entry point (VIP) for database access.
- PgBouncer (optional, if the `pgbouncer_install` variable is `true`) is a connection pooler for PostgreSQL.
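Once a cluster is deployed, you can check its state with Patroni's CLI on any cluster node. A minimal sketch (the configuration path is an assumption; adjust it to wherever your Patroni configuration lives):

```shell
# List cluster members, their roles, state, and replication lag
patronictl -c /etc/patroni/patroni.yml list
```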
This scheme enables load distribution for read operations and also allows for scaling out the cluster with read-only replicas.
When deploying to cloud providers such as AWS, GCP, Azure, DigitalOcean, and Hetzner Cloud, a cloud load balancer is automatically created by default to provide a single entry point to the database (controlled by the `cloud_load_balancer` variable).
For non-cloud environments, such as when deploying on Your Own Machines, the HAProxy load balancer is available for use. To enable it, set the `with_haproxy_load_balancing: true` variable.
Note
Your application must support sending read requests to a custom port (5001) and write requests to port 5000.
List of ports when using HAProxy:
- port 5000 (read / write) master
- port 5001 (read only) all replicas
- port 5002 (read only) synchronous replica only
- port 5003 (read only) asynchronous replicas only
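For illustration, an application could be pointed at these ports with connection strings like the following (host, database, and credentials are placeholders):

```shell
# Read/write traffic goes to the master via port 5000
export DATABASE_URL="postgresql://app_user:[email protected]:5000/app_db"
# Read-only traffic is balanced across all replicas via port 5001
export DATABASE_RO_URL="postgresql://app_user:[email protected]:5001/app_db"
```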
- HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.
- confd manages local application configuration files using templates and data from etcd or consul. Used to automate HAProxy configuration file management.
- Keepalived (optional, if the `cluster_vip` variable is specified) provides a virtual high-availability IP address (VIP) and a single entry point for database access. It implements VRRP (Virtual Router Redundancy Protocol) for Linux. In our configuration, keepalived checks the status of the HAProxy service and, in case of a failure, delegates the VIP to another server in the cluster.
To use this scheme, specify the `dcs_type: consul` variable.
This scheme is suitable for master-only access and for load balancing (using DNS) for reading across replicas. Consul Service Discovery with DNS resolving is used as a client access point to the database.
Client access point (example):
- `master.postgres-cluster.service.consul`
- `replica.postgres-cluster.service.consul`
In addition, this can be useful for a distributed cluster spanning different data centers. We can specify in advance which data center a database server is located in and then use this for applications running in the same data center.
Example: `replica.postgres-cluster.service.dc1.consul`, `replica.postgres-cluster.service.dc2.consul`
This requires installing Consul in client mode on each application server for service DNS resolution (or forwarding DNS requests to a remote Consul server instead of installing a local Consul client).
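As a quick check (a sketch, assuming a local Consul agent with its DNS interface on the default port 8600), you can query Consul DNS directly; for transparent resolution, forward the `.consul` zone to that port, for example with a dnsmasq rule such as `server=/consul/127.0.0.1#8600`:

```shell
# Verify that the Consul service names resolve to the current master and replicas
dig @127.0.0.1 -p 8600 master.postgres-cluster.service.consul +short
dig @127.0.0.1 -p 8600 replica.postgres-cluster.service.consul +short
```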
RedHat and Debian based distros (x86_64)
- Debian: 11, 12
- Ubuntu: 22.04, 24.04
- CentOS Stream: 9
- Oracle Linux: 8, 9
- Rocky Linux: 8, 9
- AlmaLinux: 8, 9
all supported PostgreSQL versions
✅ tested, works fine: PostgreSQL 10, 11, 12, 13, 14, 15, 16, 17
Cluster deployment is verified by daily automated tests for each of the supported distributions listed above.
You have the option to deploy Postgres clusters using the Console (UI), command line, or GitOps.
Tip
📩 Contact us at [email protected], and our team will help you implement Autobase into your infrastructure.
The Autobase Console (UI) is the recommended method for most users. It is designed to be user-friendly, minimizing the risk of errors and making it easier than ever to set up your PostgreSQL clusters. This method is suitable for both beginners and those who prefer a visual interface for managing their PostgreSQL clusters.
To run the autobase console, execute the following command:
docker run -d --name autobase-console \
--publish 80:80 \
--publish 8080:8080 \
--env PG_CONSOLE_API_URL=http://localhost:8080/api/v1 \
--env PG_CONSOLE_AUTHORIZATION_TOKEN=secret_token \
--env PG_CONSOLE_DOCKER_IMAGE=autobase/automation:latest \
--volume console_postgres:/var/lib/postgresql \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume /tmp/ansible:/tmp/ansible \
--restart=unless-stopped \
autobase/console:latest
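To confirm the console started correctly, standard Docker commands can be used (nothing Autobase-specific here):

```shell
# Check that the container is up, then follow its logs
docker ps --filter name=autobase-console
docker logs -f autobase-console
```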
Note
If you are running the console on a dedicated server (rather than on your laptop), replace `localhost` with the server's IP address in the `PG_CONSOLE_API_URL` variable.
Tip
It is recommended to run the console in the same network as your database servers to enable monitoring of the cluster status.
Open the Console UI:
Go to http://localhost:80 (or the address of your server) and use `secret_token` for authorization.
Refer to the Deployment section to learn more about the different deployment methods.
If you prefer the command line, follow the steps below.
The command line mode is suitable for advanced users who require greater flexibility and control over the deployment and management of their PostgreSQL clusters. While the Console (UI) is designed for ease of use and is suitable for most users, the command line provides powerful options for those experienced in automation and configuration.
- Install Ansible on one control node (which could easily be a laptop)
sudo apt update && sudo apt install -y python3-pip sshpass git
pip3 install ansible
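You can verify that Ansible is available on the control node before proceeding:

```shell
ansible --version
```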
- Install the Autobase Collection
# from Ansible Galaxy
ansible-galaxy collection install vitabaks.autobase
Or reference it in a `requirements.yml`:
# from Ansible Galaxy
collections:
  - name: vitabaks.autobase
    version: 2.2.0
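With a `requirements.yml` in place, the collection can then be installed from it:

```shell
ansible-galaxy collection install -r requirements.yml
```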
- Prepare the inventory
See the example inventory file.
Specify (non-public) IP addresses and connection settings (`ansible_user`, `ansible_ssh_pass` or `ansible_ssh_private_key_file`) for your environment, as in the sketch below.
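For illustration only, an inventory might look roughly like this (IP addresses and credentials are placeholders; treat the project's example inventory file as the authoritative reference for group names and host variables):

```ini
# Illustrative sketch only -- see the example inventory file for the full layout.
[master]
10.0.1.1

[replica]
10.0.1.2
10.0.1.3

[etcd_cluster]
10.0.1.1
10.0.1.2
10.0.1.3

[all:vars]
ansible_user=root
ansible_ssh_private_key_file=~/.ssh/id_rsa
```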
- Prepare variables
See the main.yml, system.yml and (Debian.yml or RedHat.yml) variable files for more details.
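As a sketch, a few of the variables mentioned earlier in this document could be set in your variables file like this (values are placeholders; consult main.yml and system.yml for the full list and defaults):

```yaml
# Illustrative values only -- see main.yml / system.yml for all available variables and defaults
cluster_vip: "10.0.1.10"            # optional virtual IP used as a single entry point
with_haproxy_load_balancing: true   # enable HAProxy load balancing (non-cloud deployments)
pgbouncer_install: true             # install the PgBouncer connection pooler
dcs_type: "etcd"                    # distributed configuration store: "etcd" or "consul"
```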
- Test host connectivity
ansible all -m ping
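If your inventory file is not in the default location, pass it explicitly (assuming a file named `inventory` in the current directory):

```shell
ansible all -m ping -i inventory
```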
- Create a playbook to execute the playbooks within the collection:
- name: Deploy PostgreSQL cluster
  # Start with the 'deploy_pgcluster' playbook; switch to 'config_pgcluster' afterwards
  ansible.builtin.import_playbook: vitabaks.autobase.deploy_pgcluster
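Then run it against your inventory (assuming the playbook above is saved as `playbook.yml` and the inventory file is named `inventory`):

```shell
ansible-playbook -i inventory playbook.yml
```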
If you need to start over from the very beginning, you can use the `remove_cluster` playbook.
Available variables:
- `remove_postgres`: stop the PostgreSQL service and remove data.
- `remove_etcd`: stop the etcd service and remove data.
- `remove_consul`: stop the Consul service and remove data.
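A destructive example for reference (a sketch; it assumes the same inventory file and passes the variables above as extra vars; double-check before running, as it deletes data):

```shell
# Stops PostgreSQL and etcd on the target hosts and removes their data
ansible-playbook -i inventory vitabaks.autobase.remove_cluster \
  -e "remove_postgres=true remove_etcd=true"
```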
If you find our project helpful, consider giving it a star on GitHub! Your support helps us grow and motivates us to keep improving. Starring the project is a simple yet effective way to show your appreciation and help others discover it.
By sponsoring our project, you directly contribute to its continuous improvement and innovation. As a sponsor, you'll receive exclusive benefits, including personalized support, early access to new features, and the opportunity to influence the project's direction. Your sponsorship is invaluable to us and helps ensure the project's sustainability and progress.
Become a sponsor today and help us take this project to the next level!
Support our work through GitHub Sponsors
Support our work through Patreon
Licensed under the MIT License. See the LICENSE file for details.
Vitaliy Kukharik (PostgreSQL DBA)
[email protected]
Feedback, bug reports, and feature requests are welcome!