PROFILES.YML
Located at ~/.dvt/profiles.yml (falls back to ~/.dbt/profiles.yml). This file defines all your database connections and selects the default target.
CORE CONCEPT: DEFAULT TARGET
DVT works like dbt — your project has one primary default target of a specific adapter type. This is the engine where pushdown models execute their SQL. The target: key at the top level selects which output is the default.
You can define multiple outputs of the same type and host under the default target. These represent target environments — the same concept as dbt. For example, a dev and prod output both pointing to Snowflake on the same account.
All other outputs with different types or hosts act as external connections. DVT can extract data from these sources into the default target via the federation pipeline (Sling + DuckDB), or you can push models directly to them with config(target='...').
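The environment-vs-external distinction can be sketched as a simple rule over the outputs map: an output that shares the default's adapter type and host is a target environment; anything else is an external connection. The function below is an illustrative sketch of that rule, not DVT internals; its name and signature are hypothetical.

```python
def classify_outputs(outputs, default_name):
    """Split outputs into target environments vs external connections.

    Sketch of the rule described above: same adapter type and host as the
    default output means an environment; anything else is external.
    (Illustrative only -- not DVT's actual implementation.)
    """
    default = outputs[default_name]
    environments, external = [], []
    for name, out in outputs.items():
        same_engine = out.get("type") == default.get("type")
        same_host = out.get("host") == default.get("host")
        (environments if same_engine and same_host else external).append(name)
    return environments, external

outputs = {
    "pg_dev": {"type": "postgres", "host": "db.internal.com"},
    "pg_prod": {"type": "postgres", "host": "db.internal.com"},
    "mysql_ops": {"type": "mysql", "host": "mysql.internal.com"},
}
print(classify_outputs(outputs, "pg_dev"))
# → (['pg_dev', 'pg_prod'], ['mysql_ops'])
```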
EXAMPLE
my_project:
  target: pg_dev                    # ← active default target
  outputs:
    # ─── Target Environments (same engine, safe to switch) ───
    pg_dev:                         # dev environment
      type: postgres
      host: db.internal.com
      port: 5432
      user: analyst
      password: "{{ env_var('PG_PASSWORD') }}"
      dbname: analytics_dev
      schema: public
      threads: 4
    pg_prod:                        # prod environment (same engine)
      type: postgres
      host: db.internal.com
      port: 5432
      user: dvt_service
      password: "{{ env_var('PG_PROD_PASSWORD') }}"
      dbname: analytics_prod
      schema: public
      threads: 8
    # ─── External Connections (different engines) ───
    mysql_ops:                      # MySQL operational database
      type: mysql
      host: mysql.internal.com
      port: 3306
      user: readonly
      password: "{{ env_var('MYSQL_PASSWORD') }}"
      schema: operations
    sf_warehouse:                   # Snowflake data warehouse
      type: snowflake
      account: xy12345.us-east-1
      user: DVT_USER
      password: "{{ env_var('SF_PASSWORD') }}"
      database: PROD_DB
      schema: RAW
      warehouse: COMPUTE_WH
    data_lake:                      # S3 bucket
      type: s3
      bucket: company-data-lake
      region: us-east-1
      access_key_id: "{{ env_var('AWS_ACCESS_KEY_ID') }}"
      secret_access_key: "{{ env_var('AWS_SECRET_ACCESS_KEY') }}"
      format: parquet

SWITCHING THE DEFAULT TARGET
You can switch the default target with --target on the CLI or by changing the target: value in profiles.yml. But not all switches are equal:
Same adapter, same host
Harmless. Both outputs are the same engine on the same host. All models work unchanged. This is the standard dbt workflow for promoting across environments.
Same adapter, different host
Harmless. Same SQL dialect, different server. All pushdown models work unchanged. Use this for migrating between database instances.
Different adapter type
Breaking change. Pushdown models are written in the old engine's SQL dialect and will fail on the new engine. The entire project's pushdown models would need to be refactored for the new engine's syntax. Extraction models (DuckDB SQL) are unaffected.
# Safe: environment switch (same engine)
dvt run --target pg_prod

# Safe: target migration (same engine, different host)
dvt run --target pg_staging

# Dangerous: engine shift (different adapter type!)
# Pushdown models written in PostgreSQL SQL will fail on Snowflake
dvt run --target sf_warehouse
DVT emits a DVT007 warning when --target changes the adapter type, but does not block execution. Extraction models (DuckDB SQL) are always safe across engine switches — only pushdown models are affected.
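The three switch categories above reduce to a comparison of adapter type and host between the current and new outputs. Here is a minimal sketch of that decision, under the assumption that outputs are plain dicts with `type` and `host` keys as in the example profiles.yml; the function name is illustrative, not part of DVT.

```python
def switch_risk(current, new):
    """Classify a --target switch per the rules above (illustrative sketch)."""
    if current["type"] != new["type"]:
        # Different adapter: pushdown models need a dialect rewrite (DVT007)
        return "breaking"
    if current.get("host") != new.get("host"):
        # Same dialect, different server: instance migration
        return "safe (migration)"
    # Same engine, same host: standard environment promotion
    return "safe (environment)"

pg_dev = {"type": "postgres", "host": "db.internal.com"}
pg_prod = {"type": "postgres", "host": "db.internal.com"}
sf = {"type": "snowflake", "host": None}
print(switch_risk(pg_dev, pg_prod))  # → safe (environment)
print(switch_risk(pg_dev, sf))       # → breaking
```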
TARGET RESOLUTION
DVT resolves the target for each model in this priority order:
1. config(target='...') in the model file (highest priority)
2. --target on the CLI
3. target: in profiles.yml (the default)
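A minimal sketch of one plausible resolution chain, assuming a per-model config override beats the CLI flag, which beats the profiles.yml default; the function and parameter names are hypothetical, not DVT's API.

```python
def resolve_target(model_config, cli_target, profiles_default):
    """First non-empty source wins: model config > --target > profiles.yml.

    Illustrative sketch of the priority order described above.
    """
    return model_config.get("target") or cli_target or profiles_default

# A model pinned with config(target='sf_warehouse') ignores --target pg_prod
print(resolve_target({"target": "sf_warehouse"}, "pg_prod", "pg_dev"))  # → sf_warehouse
print(resolve_target({}, "pg_prod", "pg_dev"))                          # → pg_prod
print(resolve_target({}, None, "pg_dev"))                               # → pg_dev
```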
MODEL CONFIG EXTENSIONS
Standard dbt config plus DVT extensions:
-- Standard dbt config (works unchanged)
{{ config(
    materialized='incremental',
    unique_key='id',
    schema='analytics',
    tags=['finance']
) }}

-- DVT extension: target override
-- Materializes to a different output than the default
{{ config(
    materialized='table',
    target='sf_warehouse'
) }}

-- DVT extension: bucket target with format
{{ config(
    materialized='table',
    target='data_lake',
    format='delta'  -- parquet, delta, csv, json, jsonl, avro
) }}

ENVIRONMENT VARIABLES
Use {{ env_var('VAR_NAME') }} in profiles.yml to reference environment variables. Never hardcode credentials.
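To make the substitution concrete, here is a minimal stand-in for how an env_var() reference gets rendered from the environment at load time. This is an illustrative regex sketch, not DVT's actual Jinja rendering.

```python
import os
import re

def render_env_vars(text):
    """Replace {{ env_var('NAME') }} with os.environ['NAME'] (sketch only)."""
    pattern = r"\{\{\s*env_var\('([^']+)'\)\s*\}\}"
    return re.sub(pattern, lambda m: os.environ[m.group(1)], text)

os.environ["PG_PASSWORD"] = "s3cret"
print(render_env_vars("password: \"{{ env_var('PG_PASSWORD') }}\""))
# → password: "s3cret"
```

A missing variable raises KeyError here; real templating engines typically let you supply a default instead.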
| VARIABLE | PURPOSE |
|---|---|
| DVT_PROFILES_DIR | Override profiles.yml location |
| DVT_CACHE_DIR | Override DuckDB cache directory |
| SLING_THREADS | Number of parallel Sling extractions |
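Combining the DVT_PROFILES_DIR override with the default locations from the top of this page, the lookup can be sketched as follows. This is an assumption about the resolution order, not DVT source code; DVT_PROFILES_DIR and the ~/.dvt → ~/.dbt fallback come from the table and the opening paragraph.

```python
import os
from pathlib import Path

def profiles_path():
    """Locate profiles.yml: DVT_PROFILES_DIR wins; otherwise ~/.dvt/profiles.yml,
    falling back to ~/.dbt/profiles.yml. (Illustrative sketch.)"""
    override = os.environ.get("DVT_PROFILES_DIR")
    if override:
        return Path(override) / "profiles.yml"
    candidate = Path.home() / ".dvt" / "profiles.yml"
    if candidate.exists():
        return candidate
    return Path.home() / ".dbt" / "profiles.yml"

os.environ["DVT_PROFILES_DIR"] = "/etc/dvt"
print(profiles_path())  # → /etc/dvt/profiles.yml
```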