Configuration .hive.yml¶
The .hive.yml file is the only thing you need to work with Hive. It describes your service.
Full example¶
```yaml
name: my-service
namespace: my-team
port: 8080
path: .
healthPath: /healthz
lifecycle: standard
builder: paketo
context: ..
skipTest: false

env:
  - name: DATABASE_URL
    value: postgres://localhost/mydb
  - name: LOG_LEVEL
    value: info

testEnv:
  XM_PG_CONNECTIONSTRING: "Host=localhost"

scaling:
  minScale: 2
  maxScale: 20

customDomains:
  - api.example.com

storage:
  database: true

buildArgs:
  NODE_ENV: production
  GITHUB_TOKEN: ${GITHUB_TOKEN}
```
Field reference¶
| Field | Type | Default | Description |
|---|---|---|---|
| `name` | string | directory name | Service name. Used in URL, ArgoCD, registry |
| `namespace` | string | project name from git remote | Kubernetes namespace for deployment |
| `port` | int | `8080` | Port the service listens on for HTTP |
| `path` | string | `.` | Path to source code relative to `.hive.yml` |
| `healthPath` | string | `/` | Endpoint for health checks (startup, readiness, liveness) |
| `lifecycle` | string | `standard` | Lifecycle policy for probes. See Lifecycle Policies |
| `builder` | string | `paketo` | Build method: `paketo` (Cloud Native Buildpacks) or `docker` (Dockerfile) |
| `context` | string | (empty) | Build context directory relative to the service dir. See Build Context |
| `skipTest` | bool | `false` | Skip `hive test` and remove the test job from the CI pipeline. See Skipping Tests |
| `env` | list | `[]` | Environment variables |
| `testEnv` | object | `{}` | Environment variables for `hive test` only. See Test Environment |
| `scaling` | object | `minScale: 1`, `maxScale: 10` | Knative autoscaling parameters |
| `customDomains` | list | `[]` | Custom domain names mapped to the service via Knative DomainMapping |
| `storage` | object | `{}` | Managed storage attachments. See Storage |
| `buildArgs` | object | `{}` | Build arguments passed to Docker (`--build-arg`) or Buildpacks (`--env`) |
How defaults are resolved¶
name¶
Priority:

1. Value from `.hive.yml`
2. Environment variable `HIVE_SERVICE_NAME`
3. If `.hive.yml` is at the repository root — project name from git remote
4. Name of the directory containing `.hive.yml`

```
my-repo/
  services/
    auth-service/
      .hive.yml   # name → "auth-service" (directory name)
  .hive.yml       # name → "my-repo" (project name from git remote)
```
namespace¶
Priority:

1. Value from `.hive.yml`
2. Environment variable `HIVE_NAMESPACE`
3. Project name from the git remote URL

Example: for `git@lab.xmonetize.net:infrastructure/hive/hive-examples.git` the default namespace is `hive-examples`.
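The remote-URL rule can be sketched as a small helper (illustrative only, not Hive's actual implementation — it assumes the project name is the last path segment of the remote, minus `.git`):

```javascript
// Derive the default namespace from a git remote URL:
// strip a trailing ".git", then take the last path segment.
function projectNameFromRemote(url) {
  return url.replace(/\.git$/, "").split(/[/:]/).pop();
}

projectNameFromRemote("git@lab.xmonetize.net:infrastructure/hive/hive-examples.git");
// → "hive-examples"
```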
port¶
Default is `8080`. Used for:

- `containerPort` in the Knative Service
- Health check probes (`httpGet`)
- `hive test` (container health verification)
healthPath¶
Default is /. The endpoint checked by:
- Startup probe — waits for the service to start
- Readiness probe — determines readiness to accept traffic
- Liveness probe — checks the service is alive
Must return HTTP 2xx. Nothing special required — any endpoint will do.
Environment variables¶
Three ways to set env vars (in priority order):
1. CLI flag --env (highest)¶
2. Environment variables HIVE_ENV_*¶
The HIVE_ENV_ prefix is stripped: HIVE_ENV_FOO → FOO.
3. .hive.yml (lowest)¶
```yaml
env:
  - name: LOG_LEVEL
    value: info
  - name: DATABASE_URL
    value: ${XM_PG_CONNECTIONSTRING}  # resolved from environment at deploy time
```
Values containing ${VAR} are resolved from the current environment when hive deploy runs (same syntax as buildArgs).
Merge, not replace
All three sources are merged. CLI --env overrides values from HIVE_ENV_*, which override values from .hive.yml. Variable resolution (${VAR}) happens after merging.
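The merge-then-resolve behavior can be sketched with two small helpers (illustrative only, not Hive's actual code):

```javascript
// Later spreads win, so CLI flags take the highest priority:
// .hive.yml < HIVE_ENV_* < --env
function mergeEnv(fromYaml, fromHiveEnv, fromCli) {
  return { ...fromYaml, ...fromHiveEnv, ...fromCli };
}

// After merging, replace ${VAR} with values from the
// deploy-time environment; unknown names are left as-is.
function resolveRefs(env, environ) {
  const out = {};
  for (const [key, value] of Object.entries(env)) {
    out[key] = String(value).replace(/\$\{(\w+)\}/g, (m, name) => environ[name] ?? m);
  }
  return out;
}

const merged = mergeEnv(
  { LOG_LEVEL: "info", DATABASE_URL: "${XM_PG_CONNECTIONSTRING}" }, // .hive.yml
  { LOG_LEVEL: "debug" },                                           // HIVE_ENV_*
  {}                                                                // --env
);
const resolved = resolveRefs(merged, { XM_PG_CONNECTIONSTRING: "Host=localhost" });
// resolved.LOG_LEVEL === "debug"; resolved.DATABASE_URL === "Host=localhost"
```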
Scaling¶
Knative automatically scales the service within these limits based on load.
Scale to zero
minScale: 0 means the service can be stopped when there's no traffic. The first request after idle will have a delay (cold start).
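For example, a config that trades cold starts for cost (values are illustrative):

```yaml
scaling:
  minScale: 0   # may scale to zero when idle; first request pays a cold start
  maxScale: 5
```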
Custom Domains¶
You can map custom domain names to your service using the customDomains field. Hive creates a Knative DomainMapping for each domain and provisions TLS certificates automatically via cert-manager.
Setup¶
1. Add the domain to `customDomains` in `.hive.yml`
2. Create a CNAME or A record in your DNS provider pointing the domain to the external gateway (e.g., `CNAME api.example.com → external-gateway.svcik.org`)
3. Deploy with `hive deploy` — the DomainMapping and TLS certificate will be created automatically
TLS certificates
TLS certificates are auto-provisioned by cert-manager. The first request after adding a new domain may take up to 60-120 seconds while the certificate is being issued.
Storage¶
Hive provides managed storage attachments. PostgreSQL is supported today. You request storage in .hive.yml; Hive provisions, manages, and injects credentials into your service via env vars.
PostgreSQL¶
What happens at deploy time:
1. Hive creates a `HiveComb` custom resource (consumed by the `hive-comb` operator)
2. `hive-comb` provisions inside the shared PostgreSQL cluster:
   - A user named after the service
   - A database named after the service (dashes become underscores)
   - A generated password
3. It stores the credentials in a Kubernetes Secret named `{service-name}-db`
4. The Knative Service wires the Secret in via `envFrom` — every key becomes a container env var
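The resulting wiring looks roughly like this (an illustrative fragment, not the exact manifest Hive generates; the Secret name follows the `{service-name}-db` convention above):

```yaml
# Inside the generated Knative Service spec:
envFrom:
  - secretRef:
      name: my-service-db   # each key in the Secret becomes a container env var
```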
Available environment variables¶
Your service receives:
| Variable | Description |
|---|---|
| `DATABASE_URL` | Full connection string: `postgresql://user:pass@host:port/db` |
| `PGHOST` | PostgreSQL host |
| `PGPORT` | PostgreSQL port |
| `PGUSER` | Username |
| `PGPASSWORD` | Password |
| `PGDATABASE` | Database name |
Both forms are provided — use whichever fits:

- Node.js / psycopg2 / SQLAlchemy typically read `DATABASE_URL`
- `psql` / `pg_dump` read `PG*`
Example¶
In code:
```javascript
// Node.js (pg)
import { Pool } from "pg";
const pool = new Pool({ connectionString: process.env.DATABASE_URL });
```
Lifecycle¶
- Creation: the first `hive deploy` with `storage.database: true` provisions the user and database in ~10-30 seconds.
- Updates: service config changes do not rebuild the database. Credentials remain stable.
- Service deletion: the database is retained by default (`deletionPolicy: Retain`). Delete the `HiveComb` CR manually via `kubectl` to remove it.
Service data is not automatically backed up
Hive provides storage, but backups are the service team's responsibility. The shared PostgreSQL cluster has point-in-time recovery via CNPG at the cluster level, but logical backups of your specific database are up to you.
Shared cluster
All services with storage.database: true share one PostgreSQL cluster (managed by CloudNativePG). Isolation is per-user and per-database. One service = one database. Don't expect cross-database JOINs to work.
CronJobs¶
Hive can deploy not only long-running services but also periodic tasks (cron jobs). Set `type: cronjobs` and define one or more jobs with independent schedules.
Fields¶
| Field | Type | Required | Description |
|---|---|---|---|
| `type` | string | no | Workload type: `service` (default) or `cronjobs` |
| `jobs` | list | yes (if `type: cronjobs`) | List of cron job definitions |
| `jobs[].name` | string | yes | Job name (unique within the service) |
| `jobs[].schedule` | string | yes | Cron schedule expression (e.g. `"*/5 * * * *"`) |
| `jobs[].command` | string | yes | Command to execute |
| `jobs[].args` | list | no | Arguments passed to the command |
Example¶
```yaml
name: my-workers
namespace: my-team
type: cronjobs
builder: docker

jobs:
  - name: sync-data
    schedule: "*/15 * * * *"
    command: python
    args: ["-m", "tasks.sync"]
  - name: cleanup
    schedule: "0 3 * * *"
    command: python
    args: ["-m", "tasks.cleanup"]

env:
  - name: DATABASE_URL
    value: ${DATABASE_URL}
```
Tests are skipped automatically
When type: cronjobs is set, the test stage is automatically skipped in the CI pipeline. CronJobs don't expose an HTTP port, so the standard health-check-based hive test does not apply.
Build Context¶
By default, the build context is the service directory (the directory containing .hive.yml). The context field lets you change this — useful in monorepos where a Dockerfile references files outside the service directory.
When `context` differs from the service directory, Hive automatically passes the `-f` flag with the path to the service's Dockerfile, so the builder can still find it.
When to use
Typical scenario: shared libraries or configs live at the repo root, and the Dockerfile does COPY shared/ ./shared/. Without context: .. the build would fail because those files are outside the default build context.
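A sketch of that layout (directory and service names are hypothetical):

```yaml
# my-repo/api/.hive.yml — a service one level below the repo root;
# shared/ lives at the root and is copied by the Dockerfile.
name: api
builder: docker
context: ..   # build context = my-repo/, so COPY shared/ ./shared/ resolves
```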
Skipping Tests¶
If a service cannot be tested without external infrastructure (database, third-party APIs, etc.), you can skip testing entirely by setting `skipTest: true`.

When `skipTest` is enabled:

- `hive test` skips the service
- The CI pipeline has no test job — deploy depends directly on build
Warning
Use this sparingly. Skipping tests removes the safety net that prevents broken services from being deployed. Prefer adding a minimal health endpoint that can start without dependencies.
Test Environment¶
The testEnv field defines environment variables passed to the container during hive test (via docker run -e KEY=VALUE). These are not used in production — only during testing.
Use case: the service requires certain env vars (connection strings, API keys) just to start and pass health checks.
Note
testEnv does not affect env. Production environment variables are configured separately through env, HIVE_ENV_*, or --env.
Builder¶
| Value | Description |
|---|---|
paketo |
Cloud Native Buildpacks — auto-detects language from marker files (requirements.txt, package.json, go.mod, etc.) |
docker |
Uses Dockerfile in the service directory. Required for languages without Buildpack support (Rust, C, C++) |
Build Arguments¶
Build-time arguments passed to docker build --build-arg or pack build --env.
```yaml
buildArgs:
  NODE_ENV: production            # Static value
  GITHUB_TOKEN: ${GITHUB_TOKEN}   # Substituted from environment variable
  CI_COMMIT_SHA:                  # Entire value taken from env (like docker --build-arg KEY)
```
The build arg name does not have to match the CI variable name:
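For instance, mapping the CI variable `XM_OPENAI_API_KEY` to a build arg named `XM_AI_APIKEY` (names taken from the explanation below):

```yaml
buildArgs:
  XM_AI_APIKEY: ${XM_OPENAI_API_KEY}
```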
Here the Dockerfile sees XM_AI_APIKEY, but the value comes from the XM_OPENAI_API_KEY environment variable in CI.
Value resolution:
| Format | Behavior |
|---|---|
| `KEY: value` | Used as-is |
| `KEY: ${VAR}` | Substituted from `os.environ["VAR"]` at build time. `KEY` and `VAR` can differ |
| `KEY:` (empty/null) | Takes the entire value from `os.environ["KEY"]` |
CLI override: `--build-arg` flags passed on the command line take priority over `buildArgs` from `.hive.yml`.