As part of our exploration of Customer Engagement Management (CEM) options, flexibility and control over data and processes emerged as a top priority. In pursuit of this, we evaluated several open-source solutions, and Dittofeed stood out as a compelling option. It’s actively used by a few clients at production scale, making it a solid choice for our needs.
Dittofeed comes bundled with a Docker Compose setup, which installs all services within containers. However, to meet our requirements, we needed to integrate Dittofeed with our own Postgres database. This required some customization of the Docker Compose file and setting up credentials correctly.
This is also written up at /diary/writing/moving-to-hetzner/moving-all-apps-from-azure-to-hetzner.md.
Below is a breakdown of the setup process:
Step 1: Preparing the Postgres Database
We needed to ensure that our local Postgres database was configured properly to support Dittofeed. This involved:
Creating a database user and setting a password:
We used pgcli to:
- Create a user role and set the password.
- Grant the required privileges to allow the user to create databases and seed data.
# connect as the postgres superuser
pgcli -h localhost -p 5432 -u postgres

-- create the user role and set its password
CREATE USER df_pg_user;
ALTER ROLE "df_pg_user" ENCRYPTED PASSWORD 'myPassw000rrd';
ALTER USER df_pg_user WITH CREATEDB;

-- create the database
CREATE DATABASE dittofeed;

-- grant privileges on the new database
-- https://stackoverflow.com/questions/22483555/give-all-the-permissions-to-a-user-on-a-db
\c dittofeed
GRANT ALL PRIVILEGES ON DATABASE "dittofeed" TO df_pg_user;
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO df_pg_user;
GRANT ALL PRIVILEGES ON ALL SEQUENCES IN SCHEMA public TO df_pg_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON TABLES TO df_pg_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL PRIVILEGES ON SEQUENCES TO df_pg_user;

-- connect to the new database as the new user
pgcli -h localhost -p 5432 -u df_pg_user -d dittofeed
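To sanity-check the new role before pointing Dittofeed at it, a quick login-and-permissions test along these lines helps (psql is shown for a one-liner; pgcli works the same way, and the throwaway table name is purely illustrative):

# Verify the role can log in and can create/drop objects in the new database
PGPASSWORD='myPassw000rrd' psql -h localhost -p 5432 -U df_pg_user -d dittofeed \
  -c "CREATE TABLE _perm_check (id int); DROP TABLE _perm_check;"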
Step 2: Configuring Environment Variables
We added the necessary database credentials to the .env file that Docker Compose reads from. This included:
- Database host
- Port
- User credentials
These values allow the Docker containers to connect seamlessly to the local Postgres server.
.env file:
# Database configuration
DATABASE_NAME=your_database_name
DATABASE_USER=your_database_user
DATABASE_PASSWORD=your_database_password
DATABASE_PORT=5432
# ClickHouse configuration
CLICKHOUSE_USER=your_clickhouse_user
CLICKHOUSE_PASSWORD=your_clickhouse_password
# Application secrets
PASSWORD=your_application_password
SECRET_KEY=your_secret_key
# Deployment-specific configuration
DASHBOARD_URL="https://your-dashboard-url.com"
DASHBOARD_API_BASE="https://your-api-url.com" # Can be the same as the DASHBOARD_URL.
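Before starting anything, it is worth confirming that Docker Compose actually picks these values up. One way to do that (assuming you run it from the directory that holds both the compose file and .env) is:

# Render the effective configuration and check the interpolated credentials
docker compose -f docker-compose.lite.yaml config | grep -E "DATABASE_|CLICKHOUSE_"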
Step 3: Updating the Docker Compose File
Since we wanted to use our local Postgres instead of the default Postgres container, we had to modify the Docker Compose configuration. This meant:
- Removing the Postgres container specification from the original file.
- Setting up access to the local Postgres server via the host.docker.internal hostname (mapped to the host gateway in extra_hosts).
Here’s the updated Docker Compose file, ./dittofeed/docker-compose.lite.yaml:
version: "3.9"
x-database-credentials: &database-credentials
  DATABASE_USER: ${DATABASE_USER:-df_pg_user}
  DATABASE_PASSWORD: ${DATABASE_PASSWORD:-mLuPueBKDS}
x-clickhouse-credentials: &clickhouse-credentials
  CLICKHOUSE_USER: ${CLICKHOUSE_USER:-dittofeed}
  CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD:-password}
x-backend-app-env: &backend-app-env
  <<: [*clickhouse-credentials, *database-credentials]
  NODE_ENV: production
  DATABASE_HOST: ${DATABASE_HOST:-host.docker.internal}
  DATABASE_PORT: ${DATABASE_PORT:-5432}
  CLICKHOUSE_HOST: ${CLICKHOUSE_HOST:-http://clickhouse-server:8123}
  TEMPORAL_ADDRESS: ${TEMPORAL_ADDRESS:-host.docker.internal:7233}
  WORKSPACE_NAME: ${WORKSPACE_NAME:-Default}
  AUTH_MODE: ${AUTH_MODE:-single-tenant}
  SECRET_KEY: ${SECRET_KEY:-GEGL1RHjFVOxIO80Dp8+ODlZPOjm2IDBJB/UunHlf3c=}
  PASSWORD: ${PASSWORD:-password}
  DASHBOARD_API_BASE: ${DASHBOARD_API_BASE:-http://localhost:3000}
services:
  temporal:
    container_name: temporal
    image: temporalio/auto-setup:${TEMPORAL_VERSION:-1.22.4}
    restart: always
    environment:
      - DB=postgresql
      - DB_PORT=${DATABASE_PORT:-5432}
      - POSTGRES_USER=${DATABASE_USER:-df_pg_user}
      - POSTGRES_PWD=${DATABASE_PASSWORD:-mLuPueBKDS}
      - POSTGRES_SEEDS=${DATABASE_HOST:-host.docker.internal}
      - DYNAMIC_CONFIG_FILE_PATH=config/dynamicconfig/prod.yaml
    ports:
      - 7233:7233
    volumes:
      - ./packages/backend-lib/temporal-dynamicconfig:/etc/temporal/config/dynamicconfig
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - dittofeed-network-lite
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:7233"]
      interval: 10s
      timeout: 5s
      retries: 10
  temporal-ui:
    container_name: temporal-ui
    image: temporalio/ui:${TEMPORAL_UI_VERSION:-2.22.1}
    restart: always
    depends_on:
      - temporal
    environment:
      - TEMPORAL_ADDRESS=temporal:7233
      - TEMPORAL_CORS_ORIGINS=http://localhost:3000
    ports:
      - 8080:8080
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - dittofeed-network-lite
  lite:
    image: dittofeed/dittofeed-lite:${IMAGE_TAG:-v0.18.1}
    restart: always
    ports:
      - "3000:3000"
    depends_on:
      - temporal
      - clickhouse-server
    environment:
      <<: *backend-app-env
    env_file:
      - .env
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - dittofeed-network-lite
  admin-cli:
    image: dittofeed/dittofeed-lite:${IMAGE_TAG:-v0.18.1}
    restart: always
    entrypoint: []
    profiles: ["admin-cli"]
    command: tail -f /dev/null
    tty: true
    depends_on:
      - temporal
      - clickhouse-server
    environment:
      <<: *backend-app-env
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - dittofeed-network-lite
  clickhouse-server:
    image: clickhouse/clickhouse-server:23.8.8.20-alpine
    restart: always
    environment:
      <<: *clickhouse-credentials
    ports:
      - "8123:8123"
      - "9000:9000"
      - "9009:9009"
    volumes:
      - clickhouse_lib:/var/lib/clickhouse
      - clickhouse_log:/var/log/clickhouse-server
    networks:
      - dittofeed-network-lite
  blob-storage:
    image: minio/minio
    restart: always
    profiles: ["blob-storage"]
    ports:
      - "9000:9000"
      - "9001:9001"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: password
    volumes:
      - blob-storage:/data
    command: server --console-address ":9001" /data
volumes:
  clickhouse_lib:
  clickhouse_log:
  blob-storage:
networks:
  dittofeed-network-lite:
    driver: bridge
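Since the services now reach Postgres through host.docker.internal, it is worth confirming that path works before booting the full stack. A throwaway client container can do it (the postgres image tag below is arbitrary); if this fails, check that Postgres listens on addresses reachable from the Docker bridge (listen_addresses and pg_hba.conf):

# Check that host.docker.internal resolves and Postgres answers on 5432
docker run --rm --add-host=host.docker.internal:host-gateway postgres:16-alpine \
  pg_isready -h host.docker.internal -p 5432 -U df_pg_user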
Step 4: Fine-tuning and Testing
After configuring everything, we went through some trial and error to ensure seamless connectivity between Dittofeed and the local Postgres server.
- Verified that Dittofeed could read from and write to the database (a quick check follows below).
- Seeded the database with the necessary configuration using SQL scripts.
- Tested that the CRM functionality was operational and that everything synced properly with the backend database.
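For the read/write check, listing the tables Dittofeed creates on its first boot is usually enough (this sketch assumes DATABASE_PASSWORD is exported from the same .env):

# After the lite service has started once, Dittofeed's tables should exist
PGPASSWORD="$DATABASE_PASSWORD" psql -h localhost -p 5432 -U df_pg_user -d dittofeed -c '\dt'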
Debugging
Here are a few useful commands for debugging and observing the stack:
# sudo systemctl disable apparmor.service --now
# sudo service apparmor stop
# sudo aa-status
# this was important to make it work
# https://stackoverflow.com/questions/54279514/how-to-stop-running-container-if-error-response-from-daemon-is-cannot-kill-con
sudo aa-remove-unknown
docker compose -f docker-compose.lite.yaml down
docker compose -f docker-compose.lite.yaml logs -f lite
docker compose -f docker-compose.lite.yaml logs -f temporal
# docker compose -f docker-compose.lite.yaml logs postgres
docker compose -f docker-compose.lite.yaml logs -f clickhouse-server
# docker compose -f docker-compose.lite.yaml down -v
# docker compose -f docker-compose.lite.yaml up --build
# docker compose -f docker-compose.lite.yaml up -d
# docker-compose -f docker-compose.lite.yaml up -d --force-recreate  # legacy v1 CLI; same as the next line
docker compose -f docker-compose.lite.yaml up -d --force-recreate
docker compose -f docker-compose.lite.yaml up temporal -d --force-recreate
docker compose -f docker-compose.lite.yaml up clickhouse-server -d --force-recreate
docker compose -f docker-compose.lite.yaml up lite -d --force-recreate
# ssh tunnel
ssh -N -L 3000:127.0.0.1:3000 user@xx.xx.xx.xx
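Beyond the logs, a couple of quick checks we found handy for confirming the stack is actually up:

# Container states and exposed ports
docker compose -f docker-compose.lite.yaml ps
# The dashboard should answer once the lite service is healthy
curl -fsS -o /dev/null -w "%{http_code}\n" http://localhost:3000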
Results
The final setup was a success! We now have a locally hosted Postgres database powering the Dittofeed server, providing us with a robust, open-source CRM on our platform. This configuration gives us the control and flexibility we needed while maintaining the scalability and usability of Dittofeed.
This was a great learning experience, and the outcome is a powerful CEM solution customized to our infrastructure, ready to drive customer engagement efficiently.
Here’s a robust nginx configuration file to serve https://df.silkrouteadvisors.com with reverse proxying to port 3000. It covers security headers, performance, and error handling, sets up HTTPS, and adds optional gzip compression.
Steps:
- Make sure you have nginx installed.
- Install an SSL certificate for df.silkrouteadvisors.com. You can use Certbot for Let's Encrypt:
sudo apt update
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d df.silkrouteadvisors.com
- Use the following configuration file:
/etc/nginx/sites-available/df.silkrouteadvisors.com
server {
    listen 80;
    server_name df.silkrouteadvisors.com;

    # Redirect all HTTP requests to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name df.silkrouteadvisors.com;

    # SSL Configuration (Adjust if using Let's Encrypt or other SSL providers)
    ssl_certificate /etc/letsencrypt/live/df.silkrouteadvisors.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/df.silkrouteadvisors.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Security Headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;

    # Proxy settings
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;

        # Buffer settings
        proxy_buffer_size 128k;
        proxy_buffers 4 256k;
        proxy_busy_buffers_size 256k;

        # Error Handling
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

    # Optional: Gzip Compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml+rss text/javascript;
    gzip_vary on;

    # Access and Error Logs
    access_log /var/log/nginx/df.silkrouteadvisors.com.access.log;
    error_log /var/log/nginx/df.silkrouteadvisors.com.error.log;
}
Explanation:
- Redirect HTTP to HTTPS: Ensures all traffic is encrypted.
- SSL Configuration: Uses Let's Encrypt certificates and enforces secure protocols.
- Security Headers: Protect against common vulnerabilities.
- Reverse Proxy: Sends requests to your application running on port 3000.
- Error Handling: Routes server errors to a custom page.
- Gzip Compression: Improves loading speed for static files.
- Logging: Captures both access and error logs for easier troubleshooting.
Enabling the Configuration:
Link the site configuration:
sudo ln -s /etc/nginx/sites-available/df.silkrouteadvisors.com /etc/nginx/sites-enabled/
Test the nginx configuration:
sudo nginx -t
Reload nginx:
sudo systemctl reload nginx
This configuration ensures that https://df.silkrouteadvisors.com properly serves content from your backend on port 3000 with secure, efficient, and optimized settings.
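A quick external check that the redirect and the proxy behave as intended (expect a 301 on plain HTTP and a normal response over HTTPS):

# HTTP should redirect to HTTPS
curl -I http://df.silkrouteadvisors.com
# HTTPS should serve the app proxied from port 3000
curl -I https://df.silkrouteadvisors.com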
Setting Up a systemd Service with Logging
Modify the service to enable better logging, so startup issues can be diagnosed from the journal.
sudo nano /etc/systemd/system/docker-compose-dittofeed.service
Update the [Service] section:
[Service]
WorkingDirectory=/home/isha/repos/dittofeed
# Run compose in the foreground (no -d) so container output reaches the journal
# and Restart=always can actually supervise the stack.
ExecStart=/usr/bin/docker compose -f docker-compose.lite.yaml up
ExecStop=/usr/bin/docker compose -f docker-compose.lite.yaml down
Restart=always
TimeoutStartSec=0
User=isha
Group=isha
StandardOutput=journal
StandardError=journal
Save and reload systemd:
sudo systemctl daemon-reload
sudo systemctl restart docker-compose-dittofeed.service
sudo systemctl status docker-compose-dittofeed.service
Review logs:
journalctl -u docker-compose-dittofeed.service -f
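If the unit file also carries an [Install] section (e.g. WantedBy=multi-user.target, not shown in the snippet above), the stack can be started automatically at boot:

# Start the Dittofeed compose stack on boot
sudo systemctl enable docker-compose-dittofeed.service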
Validate Docker and System Services
Check Docker status:
sudo systemctl status docker
Ensure that Docker is running properly:
sudo systemctl start docker
Reload systemd and try again:
sudo systemctl daemon-reload
sudo systemctl restart docker-compose-dittofeed.service
Managing Dittofeed Events
Dittofeed events are processed asynchronously, so expect some delay before they are reflected in the dashboard. Ensure that the parameters in your API calls are validated against the API specifications.
For example, the track event is documented here.
Similarly, a few more relevant API docs are:
- Programmatically handle user subscriptions
identify events:
const { v4: uuidv4 } = require('uuid'); // Import the uuid library for generating unique IDs
const fetch = require('node-fetch'); // Import fetch for making HTTP requests

const options = {
  method: 'POST',
  headers: {
    PublicWriteKey: 'ZTkwMzBhOTgtZWU4Ni00YjMyLWI0NTMtMzFmNTY2ZjUyMjRlOmM4NmI3ZDczMjNjYWIwMTU=',
    authorization: 'Bearer ZTkwMzBhOTgtZWU4Ni00YjMyLWI0NTMtMzFmNTY2ZjUyMjRlOmM4NmI3ZDczMjNjYWIwMTU=',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    userId: "4f40e10c-2a45-4215-8f6b-51c01c06b010", // The existing userId being identified
    messageId: uuidv4(), // Generates a unique messageId
    traits: {
      email: "tas@silkrouteadvisors.com"
    }
  })
};

fetch('https://df.silkrouteadvisors.com/api/public/apps/identify', options)
  .then(async (response) => {
    if (response.status === 204) {
      console.log("Request successful with no content (204 No Content)");
    } else if (response.ok) {
      try {
        const jsonResponse = await response.json();
        console.log("Response JSON:", jsonResponse);
      } catch (error) {
        console.log("Empty or invalid JSON response");
      }
    } else {
      console.error("Request failed with status:", response.status);
    }
  })
  .catch(err => console.error("Fetch error:", err));
track events:
const { v4: uuidv4 } = require('uuid'); // Import the uuid library for generating unique IDs
const fetch = require('node-fetch'); // Import fetch for making HTTP requests

const options = {
  method: 'POST',
  headers: {
    PublicWriteKey: 'ZTkwMzBhOTgtZWU4Ni00YjMyLWI0NTMtMzFmNTY2ZjUyMjRlOmM4NmI3ZDczMjNjYWIwMTU=',
    authorization: 'Bearer ZTkwMzBhOTgtZWU4Ni00YjMyLWI0NTMtMzFmNTY2ZjUyMjRlOmM4NmI3ZDczMjNjYWIwMTU=',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    userId: "4f40e10c-2a45-4215-8f6b-51c01c06b010",
    messageId: uuidv4(), // Generates a unique messageId
    type: "track",
    event: "DFSubscriptionChange",
    properties: {
      subscriptionId: "571ecac9-780c-49ae-9730-b03f384a508d",
      action: "Subscribe"
    }
  })
};

fetch('https://df.silkrouteadvisors.com/api/public/apps/track', options)
  .then(async (response) => {
    if (response.status === 204) {
      console.log("Track event successful with no content (204 No Content)");
    } else if (response.ok) {
      try {
        const jsonResponse = await response.json();
        console.log("Response JSON:", jsonResponse);
      } catch (error) {
        console.log("Empty or invalid JSON response");
      }
    } else {
      console.error("Track event failed with status:", response.status);
    }
  })
  .catch(err => console.error("Fetch error:", err));
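For quick shell-based testing, the same track call can be made with curl. This is a sketch assuming the same endpoint and headers as the snippets above, with WRITE_KEY standing in for the workspace write key:

# WRITE_KEY is a placeholder; use the same write key as in the JS snippets above
WRITE_KEY="<your write key>"
curl -i -X POST https://df.silkrouteadvisors.com/api/public/apps/track \
  -H "authorization: Bearer ${WRITE_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "userId": "4f40e10c-2a45-4215-8f6b-51c01c06b010",
    "messageId": "'"$(uuidgen)"'",
    "type": "track",
    "event": "DFSubscriptionChange",
    "properties": {
      "subscriptionId": "571ecac9-780c-49ae-9730-b03f384a508d",
      "action": "Subscribe"
    }
  }'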