
Configuration

Datero is a containerized application that is shipped as a single Docker image. When you run it, you can provide configuration parameters to customize its behavior. This document describes the configuration options available for Datero.

Environment variables

The Datero database engine is based on the official PostgreSQL database. More precisely, it is a PostgreSQL database with an added set of FDW extensions. It supports all the environment variables supported by the PostgreSQL Docker image.

While all the environment variables supported by the PostgreSQL image are available and behave as expected, only a few of them are relevant to Datero.

  • POSTGRES_USER
    • This optional environment variable is used in conjunction with POSTGRES_PASSWORD to set a user and its password. This variable will create the specified user with superuser power and a database with the same name. If it is not specified, then the default user of postgres will be used.
  • POSTGRES_PASSWORD - required
    • The only mandatory variable. It must not be empty or undefined. This environment variable sets the superuser password for PostgreSQL. The default superuser name is defined by the POSTGRES_USER environment variable listed above.
  • POSTGRES_DB
    • This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.

In most cases, if you want to have standard "out of the box" behavior, you will only need to set the POSTGRES_PASSWORD environment variable. That will provide you with a PostgreSQL instance with the default superuser name of postgres and the default database name of postgres.
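If you do want non-default names, all three variables can be combined. The sketch below is a hypothetical invocation (the user name `datero_admin`, password `secret`, and database name `warehouse` are made-up examples, not defaults):

```shell
# Hypothetical example: create superuser "datero_admin" and database "warehouse"
# instead of the defaults. Only POSTGRES_PASSWORD is strictly required.
docker run -d --name datero \
    -e POSTGRES_USER=datero_admin \
    -e POSTGRES_PASSWORD=secret \
    -e POSTGRES_DB=warehouse \
    chumaky/datero
```

Note that if you later provide a configuration file with a postgres section, its values must match the names chosen here (see the Postgres section below).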

Folders structure

The Datero layout spans two directories: /home and /data. The /home directory contains the application code and instance runtime data. The /data directory is used for file-based data sources.

Datero root directory

The Datero root directory is /home. Do not mount anything to it; doing so might overwrite its content, and the application will not work as expected.

The only exception is the /home/instance/config.yaml file, which is the application configuration file. You can mount your own configuration file to this path; in that case, it will be used instead of the default one.

O/S layout:

/home               # app code. DO NOT mount anything to this directory.
    /instance       # app runtime data. DO NOT mount anything to this directory.
        config.yaml # app configuration file, can be mounted from the host
/data               # directory for file-based data, can be mounted from the host

Application launch

Creating a Datero instance is described in detail in the installation guide. The minimal and full forms of the docker run command are shown below with some explanations:

docker run -d --name datero \
    -p 80:80 -p 5432:5432 \
    -e POSTGRES_PASSWORD=postgres \
    chumaky/datero
docker run -d --name datero \
    -p 80:80 -p 5432:5432 \
    -e POSTGRES_PASSWORD=postgres \
    -v "$(pwd)/data:/data" \
    -v "$(pwd)/config.yaml:/home/instance/config.yaml" \
    chumaky/datero

Configuration file

You can provide a custom configuration file to the Datero instance. It allows you to specify database connection parameters, data sources, and other settings. By using it, you can automate data source creation and avoid manual configuration.

The default configuration file is located at /home/instance/config.yaml. To override it, mount your own configuration file to that path.

It supports two main sections: postgres and servers. Both are optional and can be omitted if you are satisfied with the default settings.

# Datero configuration file
postgres:
servers:

Postgres section

The postgres section is used to define the connection parameters to the underlying PostgreSQL database. They are used by the Datero backend API.

It consists of the following parameters:

postgres:
  hostname:
  port:
  database:
  username:
  password:

Default values are:

postgres:
  hostname: localhost
  port: 5432
  database: postgres
  username: postgres
  password: postgres

This means that if you do not provide a custom configuration file, the Datero backend connects to the PostgreSQL database running on the same host, using the default superuser name and password.

In the all-inclusive Datero image, the backend API and the database run in the same container. PostgreSQL's default security settings use trust mode for all connections initiated from localhost. This means that, with the all-inclusive image, the backend API can connect to the database without providing a password, so the password field in the postgres section of the configuration file is not actually used.

All the other parameters, however, are used and do have an impact. If you specified non-default values in environment variables during container creation, such as POSTGRES_USER or POSTGRES_DB, and you use a configuration file with the postgres section specified, then you must provide the same values in the corresponding attributes.
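For instance, if the container was created with POSTGRES_USER and POSTGRES_DB set to non-default values (the names below are hypothetical examples), the postgres section would have to mirror them:

```yaml
# config.yaml — values must match the environment variables
# used at container creation (example names, not defaults).
postgres:
  hostname: localhost
  port: 5432
  database: warehouse      # same value as POSTGRES_DB
  username: datero_admin   # same value as POSTGRES_USER
  password: postgres       # ignored by the all-inclusive image (trust mode)
```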

Servers section

The servers section defines the data sources that Datero will use. Each entry in it defines a single data source, keyed by a user-defined name, with a set of parameters specific to the data source type. The data source type is determined by the underlying FDW extension.

A server definition starts with its name as the entry key. The one mandatory parameter is fdw_name, which defines the FDW extension to be used. The description parameter is optional and provides a human-readable description of the data source.

servers:
  server_name:
    description:
    fdw_name:

For the fdw_name parameter, you can use one of the following values:

  • mysql_fdw
  • postgres_fdw
  • mongo_fdw
  • oracle_fdw
  • tds_fdw
  • sqlite_fdw
  • duckdb_fdw
  • file_fdw
  • redis_fdw

This is the list of FDW extensions currently supported by Datero. Every extension has its own set of parameters, but all of them follow the same FDW specification: it is YAML based and has a common structure for all FDWs.

In short, it consists of the following sections:

  • foreign_server
  • user_mapping
  • import_foreign_schema
  • create_foreign_table
  • foreign_table_column

Only the foreign_server section is required, in the sense that you must create a foreign server to be able to use the FDW. All other sections are optional and depend on the specific FDW implementation. Each section can have zero, one, or several options used to configure the FDW.
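As a sketch of what a complete server entry might look like, here is a hypothetical mysql_fdw data source. The host, credentials, and server name are made up, the option names come from the specification below, and the exact nesting of the sections under the server entry is an assumption based on the section names listed above:

```yaml
servers:
  my_mysql:                      # user-defined server name (example)
    description: Example MySQL data source
    fdw_name: mysql_fdw
    foreign_server:              # the only required section
      host: mysql.example.com    # hypothetical host
      port: 3306
    user_mapping:
      username: app_user         # hypothetical credentials
      password: app_password
    import_foreign_schema:
      import_default: false
      import_not_null: true
```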

Below is a full specification for every FDW extension supported by Datero. You can use it as a reference when creating your own configuration file.

FDW specification
name: mysql_fdw
version: 2.9.1
source: https://github.com/EnterpriseDB/mysql_fdw


foreign_server:
  host:
    required: false
    default: 127.0.0.1
    data_type: string
    description: Address or hostname of the MySQL server. Defaults to 127.0.0.1
  port:
    required: false
    default: 3306
    data_type: integer
    description: Port number of the MySQL server. Defaults to 3306
  secure_auth:
    required: false
    default: true
    data_type: boolean
    description: Enable or disable secure authentication. Default is true.
  init_command:
    required: false
    data_type: string
    description: SQL statement to execute when connecting to the MySQL server.
  use_remote_estimate:
    required: false
    default: false
    data_type: boolean
    description: >
      Controls whether mysql_fdw issues remote EXPLAIN commands to obtain cost estimates.
      Default is false.
  reconnect:
    required: false
    default: false
    data_type: boolean
    description: >
      Enable or disable automatic reconnection to the MySQL server
      if the existing connection is found to have been lost. Default is false.
  sql_mode:
    required: false
    default: ANSI_QUOTES
    data_type: string
    description: Set MySQL sql_mode for established connection. Default is ANSI_QUOTES.
  ssl_key:
    required: false
    data_type: string
    description: The path name of the client private key file.
  ssl_cert:
    required: false
    data_type: string
    description: The path name of the client public key certificate file.
  ssl_ca:
    required: false
    data_type: string
    description: >
      The path name of the Certificate Authority (CA) certificate file. 
      This option, if used, must specify the same certificate used by the server.
  ssl_capath:
    required: false
    data_type: string
    description: The path name of the directory that contains trusted SSL CA certificate files.
  ssl_cipher:
    required: false
    data_type: string
    description: The list of permissible ciphers for SSL encryption.
  fetch_size:
    required: false
    default: 100
    data_type: integer
    description: >
      This option specifies the number of rows mysql_fdw should get in each fetch operation. 
      It can be specified for a foreign table or a foreign server. 
      The option specified on a table overrides an option specified for the server. 
      The default is 100.
  character_set:
    required: false
    default: auto
    data_type: string
    description: >
      The character set to use for MySQL connection. 
      Default is auto which means autodetect based on the operating system setting. 
      Before the introduction of the character_set option, the character set was set similar to the PostgreSQL database encoding. 
      To get this older behavior set the character_set to special value PGDatabaseEncoding.
  mysql_default_file:
    required: false
    data_type: string
    description: >
      Set the MySQL default file path if connection details, such as username, password, etc., need to be picked from the default file.


user_mapping:
  username:
    required: true
    data_type: string
    description: Username to use when connecting to MySQL.
  password:
    required: true
    data_type: string
    description: Password to authenticate to the MySQL server with.


import_foreign_schema:
  import_default:
    required: false
    default: false
    data_type: boolean
    description: >
      This option controls whether column DEFAULT expressions are included in the definitions
      of foreign tables imported from a foreign server. The default is false.
  import_not_null:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether column NOT NULL constraints are included in the definitions
      of foreign tables imported from a foreign server. The default is true.
  import_enum_as_text:
    required: false
    default: false
    data_type: boolean
    description: >
      This option can be used to map MySQL ENUM type to TEXT type in the definitions
      of foreign tables, otherwise emit a warning for type to be created. The default is false.
  import_generated:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether GENERATED column expressions are included in the definitions
      of foreign tables imported from a foreign server or not. The default is true.
      The IMPORT will fail altogether if an imported generated expression
      uses a function or operator that does not exist on PostgreSQL.


create_foreign_table:
  dbname:
    required: true
    data_type: string
    description: Name of the MySQL database to query. This is a mandatory option.
  table_name:
    required: false
    data_type: string
    description: Name of the MySQL table, default is the same as foreign table.
  max_blob_size:
    required: false
    data_type: integer
    description: Max blob size to read without truncation.
  fetch_size:
    required: false
    default: 100
    data_type: integer
    description: >
      This option specifies the number of rows mysql_fdw should get in each fetch operation.
      It can be specified for a foreign table or a foreign server.
      The option specified on a table overrides an option specified for the server.
      The default is 100.

name: postgres_fdw
version: 16.2.0
source: https://www.postgresql.org/docs/current/postgres-fdw.html


foreign_server:
  host:
    required: false
    default: localhost
    data_type: string
    description: >
      Name of host to connect to. 
      If a host name looks like an absolute path name, it specifies Unix-domain communication rather than TCP/IP communication; 
      the value is the name of the directory in which the socket file is stored. 
      (On Unix, an absolute path name begins with a slash. 
      On Windows, paths starting with drive letters are also recognized.) 
      If the host name starts with @, it is taken as a Unix-domain socket in the abstract namespace (currently supported on Linux and Windows). 
      The default behavior when host is not specified, or is empty, 
      is to connect to a Unix-domain socket in /tmp (or whatever socket directory was specified when PostgreSQL was built). 
      On Windows, the default is to connect to localhost.

      A comma-separated list of host names is also accepted, in which case each host name in the list is tried in order; 
      an empty item in the list selects the default behavior as explained above.     
  hostaddr:
    required: false
    data_type: string
    description: >
      Numeric IP address of host to connect to. 
      This should be in the standard IPv4 address format, e.g., 172.28.40.9. 
      If your machine supports IPv6, you can also use those addresses. 
      TCP/IP communication is always used when a nonempty string is specified for this parameter. 
      If this parameter is not specified, the value of host will be looked up to find the corresponding IP address — 
      or, if host specifies an IP address, that value will be used directly.

      Using hostaddr allows the application to avoid a host name look-up, 
      which might be important in applications with time constraints. 
      However, a host name is required for GSSAPI or SSPI authentication methods, 
      as well as for verify-full SSL certificate verification. The following rules are used:

      - If host is specified without hostaddr, a host name lookup occurs. 
        (When using PQconnectPoll, the lookup occurs when PQconnectPoll first considers this host name, 
        and it may cause PQconnectPoll to block for a significant amount of time.)

      - If hostaddr is specified without host, the value for hostaddr gives the server network address. 
        The connection attempt will fail if the authentication method requires a host name.

      - If both host and hostaddr are specified, the value for hostaddr gives the server network address. 
        The value for host is ignored unless the authentication method requires it, in which case it will be used as the host name.

      Note that authentication is likely to fail if host is not the name of the server at network address hostaddr. 
      Also, when both host and hostaddr are specified, host is used to identify the connection in a password file (see Section 34.16).

      A comma-separated list of hostaddr values is also accepted, in which case each host in the list is tried in order. 
      An empty item in the list causes the corresponding host name to be used, or the default host name if that is empty as well. 
      See Section 34.1.1.3 for details.

      Without either a host name or host address, libpq will connect using a local Unix-domain socket; 
      or on Windows, it will attempt to connect to localhost.
  port:
    required: false
    data_type: string
    description: >
      Port number to connect to at the server host, or socket file name extension for Unix-domain connections. 
      If multiple hosts were given in the host or hostaddr parameters, 
      this parameter may specify a comma-separated list of ports of the same length as the host list, 
      or it may specify a single port number to be used for all hosts. 
      An empty string, or an empty item in a comma-separated list, 
      specifies the default port number established when PostgreSQL was built.    
  dbname:
    required: false
    data_type: string
    description: >
      The database name. Defaults to be the same as the user name. 
      In certain contexts, the value is checked for extended formats; 
      see Section 34.1.1 for more details on those.
  passfile:
    required: false
    data_type: string
    description: >
      Specifies the name of the file used to store passwords (see Section 34.16). 
      Defaults to ~/.pgpass, or %APPDATA%\postgresql\pgpass.conf on Microsoft Windows. 
      (No error is reported if this file does not exist.)
  require_auth:
    required: false
    data_type: string
    description: >
      Specifies the authentication method that the client requires from the server. 
      If the server does not use the required method to authenticate the client, 
      or if the authentication handshake is not fully completed by the server, 
      the connection will fail. A comma-separated list of methods may also be provided, 
      of which the server must use exactly one in order for the connection to succeed. 
      By default, any authentication method is accepted, 
      and the server is free to skip authentication altogether.

      Methods may be negated with the addition of a ! prefix, 
      in which case the server must not attempt the listed method; 
      any other method is accepted, 
      and the server is free not to authenticate the client at all. 
      If a comma-separated list is provided, 
      the server may not attempt any of the listed negated methods. 
      Negated and non-negated forms may not be combined in the same setting.

      As a final special case, the none method requires the server not to use an authentication challenge. 
      (It may also be negated, to require some form of authentication.)

      The following methods may be specified:

      - password: The server must request plaintext password authentication.
      - md5: The server must request MD5 hashed password authentication.
      - gss: The server must either request a Kerberos handshake via GSSAPI 
        or establish a GSS-encrypted channel (see also gssencmode).
      - sspi: The server must request Windows SSPI authentication.
      - scram-sha-256: The server must successfully complete a SCRAM-SHA-256 authentication exchange with the client.
      - none: The server must not prompt the client for an authentication exchange. 
        (This does not prohibit client certificate authentication via TLS, 
        nor GSS authentication via its encrypted transport.)
  channel_binding:
    required: false
    data_type: string
    description: >
      This option controls the client's use of channel binding. 
      A setting of require means that the connection must employ channel binding, 
      prefer means that the client will choose channel binding if available, 
      and disable prevents the use of channel binding. 
      The default is prefer if PostgreSQL is compiled with SSL support; 
      otherwise the default is disable.

      Channel binding is a method for the server to authenticate itself to the client. 
      It is only supported over SSL connections with PostgreSQL 11 or later servers using the SCRAM authentication method.
  connect_timeout:
    required: false
    data_type: string
    description: >
      Maximum time to wait while connecting, in seconds (write as a decimal integer, e.g., 10). 
      Zero, negative, or not specified means wait indefinitely. 
      The minimum allowed timeout is 2 seconds, therefore a value of 1 is interpreted as 2. 
      This timeout applies separately to each host name or IP address. 
      For example, if you specify two hosts and connect_timeout is 5, 
      each host will time out if no connection is made within 5 seconds, 
      so the total time spent waiting for a connection might be up to 10 seconds.
  options:
    required: false
    data_type: string
    description: >
      Specifies command-line options to send to the server at connection start. 
      For example, setting this to -c geqo=off sets the session's value of the geqo parameter to off. 
      Spaces within this string are considered to separate command-line arguments, 
      unless escaped with a backslash (\); write \\ to represent a literal backslash. 
      For a detailed discussion of the available options, consult Chapter 20.
  application_name:
    required: false
    data_type: string
    description: >
      Specifies a value for the application_name configuration parameter.
      This may appear in either or both a connection and postgres_fdw.application_name. 
      If both are present, postgres_fdw.application_name overrides the connection setting. 
      Unlike libpq, postgres_fdw allows application_name to include “escape sequences”. 
      See postgres_fdw.application_name for details.
  fallback_application_name:
    required: false
    data_type: string
    description: >
      Readonly. Always set to postgres_fdw.
  keepalives:
    required: false
    data_type: string
    description: >
      Controls whether client-side TCP keepalives are used. 
      The default value is 1, meaning on, but you can change this to 0, meaning off, if keepalives are not wanted. 
      This parameter is ignored for connections made via a Unix-domain socket.
  keepalives_idle:
    required: false
    data_type: string
    description: >
      Controls the number of seconds of inactivity after which TCP should send a keepalive message to the server. 
      A value of zero uses the system default. 
      This parameter is ignored for connections made via a Unix-domain socket, 
      or if keepalives are disabled. 
      It is only supported on systems where TCP_KEEPIDLE or an equivalent socket option is available, and on Windows; 
      on other systems, it has no effect.
  keepalives_interval:
    required: false
    data_type: string
    description: >
      Controls the number of seconds after which a TCP keepalive message that is not acknowledged by the server should be retransmitted. 
      A value of zero uses the system default. 
      This parameter is ignored for connections made via a Unix-domain socket, 
      or if keepalives are disabled. 
      It is only supported on systems where TCP_KEEPINTVL or an equivalent socket option is available, and on Windows; 
      on other systems, it has no effect.
  keepalives_count:
    required: false
    data_type: string
    description: >
      Controls the number of TCP keepalives that can be lost before the client's connection to the server is considered dead. 
      A value of zero uses the system default. 
      This parameter is ignored for connections made via a Unix-domain socket, 
      or if keepalives are disabled. 
      It is only supported on systems where TCP_KEEPCNT or an equivalent socket option is available; 
      on other systems, it has no effect.
  tcp_user_timeout:
    required: false
    data_type: string
    description: >
      Controls the number of milliseconds that transmitted data may remain unacknowledged before a connection is forcibly closed. 
      A value of zero uses the system default. 
      This parameter is ignored for connections made via a Unix-domain socket. 
      It is only supported on systems where TCP_USER_TIMEOUT is available; 
      on other systems, it has no effect.
  replication:
    required: false
    data_type: string
    description: >
      This option determines whether the connection should use the replication protocol instead of the normal protocol. 
      This is what PostgreSQL replication connections as well as tools such as pg_basebackup use internally, 
      but it can also be used by third-party applications. 
      For a description of the replication protocol, consult Section 55.4.

      The following values, which are case-insensitive, are supported:

      - true, on, yes, 1
        The connection goes into physical replication mode.
      - database
        The connection goes into logical replication mode, connecting to the database specified in the dbname parameter.
      - false, off, no, 0
        The connection is a regular one, which is the default behavior.

      In physical or logical replication mode, only the simple query protocol can be used.
  gssencmode:
    required: false
    data_type: string
    description: >
      This option determines whether or with what priority a secure GSS TCP/IP connection will be negotiated with the server. 
      There are three modes:

      - disable
        only try a non-GSSAPI-encrypted connection
      - prefer (default)
        if there are GSSAPI credentials present (i.e., in a credentials cache), 
        first try a GSSAPI-encrypted connection; 
        if that fails or there are no credentials, try a non-GSSAPI-encrypted connection. 
        This is the default when PostgreSQL has been compiled with GSSAPI support.
      - require
        only try a GSSAPI-encrypted connection

      gssencmode is ignored for Unix domain socket communication. 
      If PostgreSQL is compiled without GSSAPI support, using the require option will cause an error, 
      while prefer will be accepted but libpq will not actually attempt a GSSAPI-encrypted connection.
  sslmode:
    required: false
    data_type: string
    description: >
      This option determines whether or with what priority a secure SSL TCP/IP connection will be negotiated with the server. 
      There are six modes:

      - disable
        only try a non-SSL connection
      - allow
        first try a non-SSL connection; if that fails, try an SSL connection
      - prefer (default)
        first try an SSL connection; if that fails, try a non-SSL connection
      - require
        only try an SSL connection. If a root CA file is present, verify the certificate in the same way as if verify-ca was specified
      - verify-ca
        only try an SSL connection, and verify that the server certificate is issued by a trusted certificate authority (CA)
      - verify-full
        only try an SSL connection, verify that the server certificate is issued by a trusted CA and that the requested server host name matches that in the certificate

      See Section 34.19 for a detailed description of how these options work.

      sslmode is ignored for Unix domain socket communication. 
      If PostgreSQL is compiled without SSL support, using options require, verify-ca, or verify-full will cause an error, 
      while options allow and prefer will be accepted but libpq will not actually attempt an SSL connection.

      Note that if GSSAPI encryption is possible, that will be used in preference to SSL encryption, 
      regardless of the value of sslmode. 
      To force use of SSL encryption in an environment that has working GSSAPI infrastructure (such as a Kerberos server), 
      also set gssencmode to disable.
  requiressl:
    required: false
    data_type: string
    description: >
      This option is deprecated in favor of the sslmode setting.

      If set to 1, an SSL connection to the server is required (this is equivalent to sslmode require). 
      libpq will then refuse to connect if the server does not accept an SSL connection. 
      If set to 0 (default), libpq will negotiate the connection type with the server (equivalent to sslmode prefer). 
      This option is only available if PostgreSQL is compiled with SSL support.
  sslcompression:
    required: false
    data_type: string
    description: >
      If set to 1, data sent over SSL connections will be compressed. 
      If set to 0, compression will be disabled. 
      The default is 0. 
      This parameter is ignored if a connection without SSL is made.

      SSL compression is nowadays considered insecure and its use is no longer recommended. 
      OpenSSL 1.1.0 disables compression by default, and many operating system distributions disable it in prior versions as well, 
      so setting this parameter to on will not have any effect if the server does not accept compression. 
      PostgreSQL 14 disables compression completely in the backend.

      If security is not a primary concern, compression can improve throughput if the network is the bottleneck. 
      Disabling compression can improve response time and throughput if CPU performance is the limiting factor.
  sslcert:
    required: false
    data_type: string
    description: >
      This parameter specifies the file name of the client SSL certificate, 
      replacing the default ~/.postgresql/postgresql.crt. 
      This parameter is ignored if an SSL connection is not made.

      May appear in either or both a connection and a user mapping. 
      If both are present, the user mapping setting overrides the connection setting.
  sslkey:
    required: false
    data_type: string
    description: >
      This parameter specifies the location for the secret key used for the client certificate. 
      It can either specify a file name that will be used instead of the default ~/.postgresql/postgresql.key, 
      or it can specify a key obtained from an external “engine” (engines are OpenSSL loadable modules). 
      An external engine specification should consist of a colon-separated engine name and an engine-specific key identifier. 
      This parameter is ignored if an SSL connection is not made.

      May appear in either or both a connection and a user mapping. 
      If both are present, the user mapping setting overrides the connection setting.
  sslcertmode:
    required: false
    data_type: string
    description: >
      This option determines whether a client certificate may be sent to the server, 
      and whether the server is required to request one. 
      There are three modes:

      - disable
        A client certificate is never sent, even if one is available (default location or provided via sslcert).
      - allow (default)
        A certificate may be sent, if the server requests one and the client has one to send.
      - require
        The server must request a certificate. 
        The connection will fail if the client does not send a certificate and the server successfully authenticates the client anyway.
  sslrootcert:
    required: false
    data_type: string
    description: >
      This parameter specifies the name of a file containing SSL certificate authority (CA) certificate(s). 
      If the file exists, the server's certificate will be verified to be signed by one of these authorities. 
      The default is ~/.postgresql/root.crt.

      The special value system may be specified instead, 
      in which case the system's trusted CA roots will be loaded. 
      The exact locations of these root certificates differ by SSL implementation and platform. 
      For OpenSSL in particular, the locations may be further modified by the SSL_CERT_DIR and SSL_CERT_FILE environment variables.
  sslcrl:
    required: false
    data_type: string
    description: >
      This parameter specifies the file name of the SSL server certificate revocation list (CRL). 
      Certificates listed in this file, if it exists, will be rejected while attempting to authenticate the server's certificate. 
      If neither sslcrl nor sslcrldir is set, this setting is taken as ~/.postgresql/root.crl.
  sslcrldir:
    required: false
    data_type: string
    description: >
      This parameter specifies the directory name of the SSL server certificate revocation list (CRL). 
      Certificates listed in the files in this directory, if it exists, will be rejected while attempting to authenticate the server's certificate.

      The directory needs to be prepared with the OpenSSL command openssl rehash or c_rehash. 
      See its documentation for details.

      Both sslcrl and sslcrldir can be specified together.
  sslsni:
    required: false
    data_type: string
    description: >
      If set to 1 (default), libpq sets the TLS extension “Server Name Indication” (SNI) on SSL-enabled connections. 
      By setting this parameter to 0, this is turned off.

      The Server Name Indication can be used by SSL-aware proxies to route connections without having to decrypt the SSL stream. 
      (Note that this requires a proxy that is aware of the PostgreSQL protocol handshake, not just any SSL proxy.) 
      However, SNI makes the destination host name appear in cleartext in the network traffic, so it might be undesirable in some cases.
  requirepeer:
    required: false
    data_type: string
    description: >
      This parameter specifies the operating-system user name of the server, 
      for example requirepeer=postgres. 
      When making a Unix-domain socket connection, 
      if this parameter is set, the client checks at the beginning of the connection that the server process is running under the specified user name; 
      if it is not, the connection is aborted with an error. 
      This parameter can be used to provide server authentication similar to that available with SSL certificates on TCP/IP connections. 
      (Note that if the Unix-domain socket is in /tmp or another publicly writable location, 
      any user could start a server listening there. 
      Use this parameter to ensure that you are connected to a server run by a trusted user.) 
      This option is only supported on platforms for which the peer authentication method is implemented; 
      see Section 21.9.
  ssl_min_protocol_version:
    required: false
    data_type: string
    description: >
      This parameter specifies the minimum SSL/TLS protocol version to allow for the connection. 
      Valid values are TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. 
      The supported protocols depend on the version of OpenSSL used, 
      older versions not supporting the most modern protocol versions. 
      If not specified, the default is TLSv1.2, which satisfies industry best practices as of this writing.
  ssl_max_protocol_version:
    required: false
    data_type: string
    description: >
      This parameter specifies the maximum SSL/TLS protocol version to allow for the connection. 
      Valid values are TLSv1, TLSv1.1, TLSv1.2 and TLSv1.3. 
      The supported protocols depend on the version of OpenSSL used, 
      older versions not supporting the most modern protocol versions. 
      If not set, this parameter is ignored and the connection will use the maximum bound defined by the backend, if set. 
      Setting the maximum protocol version is mainly useful for testing or if some component has issues working with a newer protocol.
  krbsrvname:
    required: false
    data_type: string
    description: >
      Kerberos service name to use when authenticating with GSSAPI. 
      This must match the service name specified in the server configuration for Kerberos authentication to succeed. 
      (See also Section 21.6.) 
      The default value is normally postgres, 
      but that can be changed when building PostgreSQL via the --with-krb-srvnam option of configure. 
      In most environments, this parameter never needs to be changed. 
      Some Kerberos implementations might require a different service name, 
      such as Microsoft Active Directory which requires the service name to be in upper case (POSTGRES).
  gsslib:
    required: false
    data_type: string
    description: >
      GSS library to use for GSSAPI authentication. 
      Currently this is disregarded except on Windows builds that include both GSSAPI and SSPI support. 
      In that case, set this to gssapi to cause libpq to use the GSSAPI library for authentication instead of the default SSPI.
  gssdelegation:
    required: false
    data_type: string
    description: >
      Forward (delegate) GSS credentials to the server. 
      The default is 0 which means credentials will not be forwarded to the server. 
      Set this to 1 to have credentials forwarded when possible.
  service:
    required: false
    data_type: string
    description: >
      Service name to use for additional parameters. 
      It specifies a service name in pg_service.conf that holds additional connection parameters. 
      This allows applications to specify only a service name so connection parameters can be centrally maintained. 
      See Section 34.17.
  target_session_attrs:
    required: false
    data_type: string
    description: >
      This option determines whether the session must have certain properties to be acceptable. 
      It's typically used in combination with multiple host names to select the first acceptable alternative among several hosts. 
      There are six modes:

      - any (default)
        any successful connection is acceptable
      - read-write
        session must accept read-write transactions by default (that is, the server must not be in hot standby mode and the default_transaction_read_only parameter must be off)
      - read-only
        session must not accept read-write transactions by default (the converse)
      - primary
        server must not be in hot standby mode
      - standby
        server must be in hot standby mode
      - prefer-standby
        first try to find a standby server, but if none of the listed hosts is a standby server, try again in any mode
  load_balance_hosts:
    required: false
    data_type: string
    description: >
      Controls the order in which the client tries to connect to the available hosts and addresses. 
      Once a connection attempt is successful no other hosts and addresses will be tried. 
      This parameter is typically used in combination with multiple host names or a DNS record that returns multiple IPs. 
      This parameter can be used in combination with target_session_attrs to, for example, load balance over standby servers only. 
      Once successfully connected, subsequent queries on the returned connection will all be sent to the same server. 
      There are currently two modes:

      - disable (default)
        No load balancing across hosts is performed. 
        Hosts are tried in the order in which they are provided and addresses are tried in the order they are received from DNS or a hosts file.
      - random
        Hosts and addresses are tried in random order. 
        This value is mostly useful when opening multiple connections at the same time, 
        possibly from different machines. 
        This way connections can be load balanced across multiple PostgreSQL servers.

        While random load balancing, due to its random nature, will almost never result in a completely uniform distribution, 
        it statistically gets quite close. 
        One important aspect here is that this algorithm uses two levels of random choices: 
        First the hosts will be resolved in random order. 
        Then secondly, before resolving the next host, all resolved addresses for the current host will be tried in random order. 
        This behavior can greatly skew the number of connections each node gets in certain cases, 
        for instance when some hosts resolve to more addresses than others. 
        But such a skew can also be used on purpose, 
        e.g. to increase the number of connections a larger server gets by providing its hostname multiple times in the host string.

        When using this value it's recommended to also configure a reasonable value for connect_timeout, 
        so that if one of the nodes used for load balancing is not responding, a new node will be tried.
  use_remote_estimate:
    required: false
    default: false
    data_type: boolean
    description: >
      This option, which can be specified for a foreign table or a foreign server, 
      controls whether postgres_fdw issues remote EXPLAIN commands to obtain cost estimates. 
      A setting for a foreign table overrides any setting for its server, but only for that table. The default is false.
  fdw_startup_cost:
    required: false
    default: 100
    data_type: number
    description: >
      This option, which can be specified for a foreign server, 
      is a floating point value that is added to the estimated startup cost of any foreign-table scan on that server. 
      This represents the additional overhead of establishing a connection, parsing and planning the query on the remote side, etc. 
      The default value is 100.
  fdw_tuple_cost:
    required: false
    default: 0.01
    data_type: number
    description: >
      This option, which can be specified for a foreign server, 
      is a floating point value that is used as extra cost per-tuple for foreign-table scans on that server. 
      This represents the additional overhead of data transfer between servers. 
      You might increase or decrease this number to reflect higher or lower network delay to the remote server. 
      The default value is 0.01.
  analyze_sampling:
    required: false
    default: auto
    data_type: string
    description: >
      This option, which can be specified for a foreign table or a foreign server, 
      determines if ANALYZE on a foreign table samples the data on the remote side, 
      or reads and transfers all data and performs the sampling locally. 
      The supported values are:

      - off
      - random
      - system
      - bernoulli 
      - auto (default) 

      off disables remote sampling, so all data are transferred and sampled locally.

      random performs remote sampling using the random() function to choose returned rows, 
      while system and bernoulli rely on the built-in TABLESAMPLE methods of those names. 
      random works on all remote server versions, while TABLESAMPLE is supported only since 9.5. 

      auto (the default) picks the recommended sampling method automatically; 
      currently it means either bernoulli or random depending on the remote server version.
  extensions:
    required: false
    data_type: string
    description: >
      This option is a comma-separated list of names of PostgreSQL extensions that are installed, 
      in compatible versions, on both the local and remote servers. 
      Functions and operators that are immutable and belong to a listed extension 
      will be considered shippable to the remote server. 
      This option can only be specified for foreign servers, not per-table.

      When using the extensions option, it is the user's responsibility that the listed extensions 
      exist and behave identically on both the local and remote servers. 
      Otherwise, remote queries may fail or behave unexpectedly.
  fetch_size:
    required: false
    default: 100
    data_type: integer
    description: >
      This option specifies the number of rows postgres_fdw should get in each fetch operation. 
      It can be specified for a foreign table or a foreign server. 
      The option specified on a table overrides an option specified for the server. 
      The default is 100.
  batch_size:
    required: false
    default: 1
    data_type: integer
    description: >
      This option specifies the number of rows postgres_fdw should insert in each insert operation. 
      It can be specified for a foreign table or a foreign server. 
      The option specified on a table overrides an option specified for the server. 
      The default is 1.

      Note the actual number of rows postgres_fdw inserts at once depends on the number of columns and the provided batch_size value. 
      The batch is executed as a single query, and the libpq protocol 
      (which postgres_fdw uses to connect to a remote server) limits the number of parameters in a single query to 65535. 
      When the number of columns * batch_size exceeds the limit, the batch_size will be adjusted to avoid an error.

      This option also applies when copying into foreign tables. 
      In that case the actual number of rows postgres_fdw copies at once is determined in a similar way to the insert case, 
      but it is limited to at most 1000 due to implementation restrictions of the COPY command.
  async_capable:
    required: false
    default: false
    data_type: boolean
    description: >
      This option controls whether postgres_fdw allows foreign tables to be scanned concurrently for asynchronous execution. 
      It can be specified for a foreign table or a foreign server. 
      A table-level option overrides a server-level option. 
      The default is false.

      In order to ensure that the data being returned from a foreign server is consistent, 
      postgres_fdw will only open one connection for a given foreign server and will run all queries against that server sequentially 
      even if there are multiple foreign tables involved, unless those tables are subject to different user mappings. 
      In such a case, it may be more performant to disable this option to eliminate the overhead associated with running queries asynchronously.

      Asynchronous execution is applied even when an Append node contains subplan(s) executed synchronously as well as subplan(s) executed asynchronously. 
      In such a case, if the asynchronous subplans are ones processed using postgres_fdw, 
      tuples from the asynchronous subplans are not returned until after at least one synchronous subplan returns all tuples, 
      as that subplan is executed while the asynchronous subplans are waiting for the results of asynchronous queries sent to foreign servers. 
      This behavior might change in a future release.
  parallel_commit:
    required: false
    default: false
    data_type: boolean
    description: >
      This option controls whether postgres_fdw commits, in parallel, 
      remote transactions opened on a foreign server in a local transaction when the local transaction is committed. 
      This setting also applies to remote and local subtransactions. 
      This option can only be specified for foreign servers, not per-table. 
      The default is false.
  parallel_abort:
    required: false
    default: false
    data_type: boolean
    description: >
      This option controls whether postgres_fdw aborts, in parallel, 
      remote transactions opened on a foreign server in a local transaction when the local transaction is aborted. 
      This setting also applies to remote and local subtransactions. 
      This option can only be specified for foreign servers, not per-table. 
      The default is false.
  updatable:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether postgres_fdw allows foreign tables to be modified using INSERT, UPDATE and DELETE commands. 
      It can be specified for a foreign table or a foreign server. 
      A table-level option overrides a server-level option. 
      The default is true.

      Of course, if the remote table is not in fact updatable, an error would occur anyway. 
      Use of this option primarily allows the error to be thrown locally without querying the remote server. 
      Note however that the information_schema views will report a postgres_fdw foreign table 
      to be updatable (or not) according to the setting of this option, without any check of the remote server.
  truncatable:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether postgres_fdw allows foreign tables to be truncated using the TRUNCATE command. 
      It can be specified for a foreign table or a foreign server. 
      A table-level option overrides a server-level option. 
      The default is true.

      Of course, if the remote table is not in fact truncatable, an error would occur anyway. 
      Use of this option primarily allows the error to be thrown locally without querying the remote server.
  keep_connections:
    required: false
    default: on
    data_type: boolean
    description: >
      This option controls whether postgres_fdw keeps the connections to the foreign server open so that subsequent queries can re-use them. 
      It can only be specified for a foreign server. 
      The default is on. 
      If set to off, all connections to this foreign server will be discarded at the end of each transaction.
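The server-level options above are supplied through the OPTIONS clause of CREATE SERVER. A minimal sketch, in which the server name, hosts, file paths, and option values are all hypothetical (host, port, and dbname are standard libpq connection options):

```sql
-- Hypothetical postgres_fdw server combining SSL, session, and cost options.
CREATE SERVER pg_remote
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (
    host 'db1.example.com,db2.example.com',  -- two candidate hosts
    port '5432',
    dbname 'sales',
    sslrootcert '/etc/datero/certs/root.crt',
    target_session_attrs 'read-write',       -- pick the writable primary
    load_balance_hosts 'disable',
    fdw_startup_cost '200',                  -- connections to this server are expensive
    fetch_size '500'
  );
```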


user_mapping:
  user:
    required: false
    data_type: string
    description: >
      PostgreSQL user name to connect as. 
      Defaults to be the same as the operating system name of the user running the application.
  password:
    required: false
    data_type: string
    description: Password to be used if the server demands password authentication.
  sslcert:
    required: false
    data_type: string
    description: >
      This parameter specifies the file name of the client SSL certificate, 
      replacing the default ~/.postgresql/postgresql.crt. 
      This parameter is ignored if an SSL connection is not made.

      May appear in either or both a connection and a user mapping. 
      If both are present, the user mapping setting overrides the connection setting.
  sslkey:
    required: false
    data_type: string
    description: >
      This parameter specifies the location for the secret key used for the client certificate. 
      It can either specify a file name that will be used instead of the default ~/.postgresql/postgresql.key, 
      or it can specify a key obtained from an external “engine” (engines are OpenSSL loadable modules). 
      An external engine specification should consist of a colon-separated engine name and an engine-specific key identifier. 
      This parameter is ignored if an SSL connection is not made.

      May appear in either or both a connection and a user mapping. 
      If both are present, the user mapping setting overrides the connection setting.
  sslpassword:
    required: false
    data_type: string
    description: >
      This parameter specifies the password for the secret key specified in sslkey, 
      allowing client certificate private keys to be stored in encrypted form on disk even when interactive passphrase input is not practical.

      Specifying this parameter with any non-empty value suppresses the "Enter PEM pass phrase:" prompt 
      that OpenSSL will emit by default when an encrypted client certificate key is provided to libpq.

      If the key is not encrypted this parameter is ignored. 
      The parameter has no effect on keys specified by OpenSSL engines unless the engine uses the OpenSSL password callback mechanism for prompts.

      There is no environment variable equivalent to this option, 
      and no facility for looking it up in .pgpass. 
      It can be used in a service file connection definition. 
      Users with more sophisticated uses should consider using OpenSSL engines and tools like PKCS#11 or USB crypto offload devices.
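The user mapping options follow the same pattern via CREATE USER MAPPING. A sketch with hypothetical role, server, and file names, assuming a postgres_fdw server named pg_remote already exists:

```sql
-- Hypothetical mapping: the client certificate authenticates the remote user.
CREATE USER MAPPING FOR local_analyst
  SERVER pg_remote
  OPTIONS (
    user 'remote_analyst',
    sslcert '/etc/datero/certs/analyst.crt',
    sslkey  '/etc/datero/certs/analyst.key',
    sslpassword 'key-passphrase'  -- only needed if the key file is encrypted
  );
```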


import_foreign_schema:
  import_collate:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether column COLLATE options are included 
      in the definitions of foreign tables imported from a foreign server. 
      The default is true. 
      You might need to turn this off if the remote server has a different set of collation names than the local server does, 
      which is likely to be the case if it's running on a different operating system. 
      If you do so, however, there is a very severe risk that the imported table columns' collations 
      will not match the underlying data, resulting in anomalous query behavior.

      Even when this parameter is set to true, importing columns whose collation is the remote server's default can be risky. 
      They will be imported with COLLATE "default", which will select the local server's default collation, which could be different.
  import_default:
    required: false
    default: false
    data_type: boolean
    description: >
      This option controls whether column DEFAULT expressions are included 
      in the definitions of foreign tables imported from a foreign server. 
      The default is false. 
      If you enable this option, be wary of defaults that might get computed differently on the local server 
      than they would be on the remote server; nextval() is a common source of problems. 
      The IMPORT will fail altogether if an imported default expression uses a function or operator that does not exist locally.
  import_generated:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether column GENERATED expressions are included 
      in the definitions of foreign tables imported from a foreign server. 
      The default is true. 
      The IMPORT will fail altogether if an imported generated expression uses a function or operator that does not exist locally.
  import_not_null:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether column NOT NULL constraints are included 
      in the definitions of foreign tables imported from a foreign server. 
      The default is true.
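These options are passed in the OPTIONS clause of IMPORT FOREIGN SCHEMA. A sketch with hypothetical server and schema names:

```sql
-- Skip remote DEFAULT expressions (e.g. nextval() would misbehave locally)
-- and collations, but keep NOT NULL constraints.
IMPORT FOREIGN SCHEMA public
  FROM SERVER pg_remote
  INTO staging
  OPTIONS (
    import_default 'false',
    import_collate 'false',
    import_not_null 'true'
  );
```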


create_foreign_table:
  schema_name:
    required: false
    data_type: string
    description: >
      This option, which can be specified for a foreign table, 
      gives the schema name to use for the foreign table on the remote server. 
      If this option is omitted, the name of the foreign table's schema is used.
  table_name:
    required: false
    data_type: string
    description: >
      This option, which can be specified for a foreign table, 
      gives the table name to use for the foreign table on the remote server. 
      If this option is omitted, the foreign table's name is used.
  use_remote_estimate:
    required: false
    default: false
    data_type: boolean
    description: >
      This option, which can be specified for a foreign table or a foreign server, 
      controls whether postgres_fdw issues remote EXPLAIN commands to obtain cost estimates. 
      A setting for a foreign table overrides any setting for its server, but only for that table. The default is false.
  fetch_size:
    required: false
    default: 100
    data_type: integer
    description: >
      This option specifies the number of rows postgres_fdw should get in each fetch operation. 
      It can be specified for a foreign table or a foreign server. 
      The option specified on a table overrides an option specified for the server. 
      The default is 100.
  batch_size:
    required: false
    default: 1
    data_type: integer
    description: >
      This option specifies the number of rows postgres_fdw should insert in each insert operation. 
      It can be specified for a foreign table or a foreign server. 
      The option specified on a table overrides an option specified for the server. 
      The default is 1.

      Note the actual number of rows postgres_fdw inserts at once depends on the number of columns and the provided batch_size value. 
      The batch is executed as a single query, and the libpq protocol 
      (which postgres_fdw uses to connect to a remote server) limits the number of parameters in a single query to 65535. 
      When the number of columns * batch_size exceeds the limit, the batch_size will be adjusted to avoid an error.

      This option also applies when copying into foreign tables. 
      In that case the actual number of rows postgres_fdw copies at once is determined in a similar way to the insert case, 
      but it is limited to at most 1000 due to implementation restrictions of the COPY command.
  async_capable:
    required: false
    default: false
    data_type: boolean
    description: >
      This option controls whether postgres_fdw allows foreign tables to be scanned concurrently for asynchronous execution. 
      It can be specified for a foreign table or a foreign server. 
      A table-level option overrides a server-level option. 
      The default is false.

      In order to ensure that the data being returned from a foreign server is consistent, 
      postgres_fdw will only open one connection for a given foreign server and will run all queries against that server sequentially 
      even if there are multiple foreign tables involved, unless those tables are subject to different user mappings. 
      In such a case, it may be more performant to disable this option to eliminate the overhead associated with running queries asynchronously.

      Asynchronous execution is applied even when an Append node contains subplan(s) executed synchronously as well as subplan(s) executed asynchronously. 
      In such a case, if the asynchronous subplans are ones processed using postgres_fdw, 
      tuples from the asynchronous subplans are not returned until after at least one synchronous subplan returns all tuples, 
      as that subplan is executed while the asynchronous subplans are waiting for the results of asynchronous queries sent to foreign servers. 
      This behavior might change in a future release.
  updatable:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether postgres_fdw allows foreign tables to be modified using INSERT, UPDATE and DELETE commands. 
      It can be specified for a foreign table or a foreign server. 
      A table-level option overrides a server-level option. 
      The default is true.

      Of course, if the remote table is not in fact updatable, an error would occur anyway. 
      Use of this option primarily allows the error to be thrown locally without querying the remote server. 
      Note however that the information_schema views will report a postgres_fdw foreign table 
      to be updatable (or not) according to the setting of this option, without any check of the remote server.
  truncatable:
    required: false
    default: true
    data_type: boolean
    description: >
      This option controls whether postgres_fdw allows foreign tables to be truncated using the TRUNCATE command. 
      It can be specified for a foreign table or a foreign server. 
      A table-level option overrides a server-level option. 
      The default is true.

      Of course, if the remote table is not in fact truncatable, an error would occur anyway. 
      Use of this option primarily allows the error to be thrown locally without querying the remote server.


foreign_table_column:
  column_name:
    required: false
    data_type: string
    description: >
      This option, which can be specified for a column of a foreign table, 
      gives the column name to use for the column on the remote server. 
      If this option is omitted, the column's name is used.
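Table-level and column-level options come together in CREATE FOREIGN TABLE. A sketch with hypothetical names, mapping a remote table and column whose names differ from the local ones (the server pg_remote is assumed to exist):

```sql
CREATE FOREIGN TABLE staging.orders (
  id      bigint OPTIONS (column_name 'order_id'),  -- remote column has a different name
  total   numeric,
  created timestamptz
)
SERVER pg_remote
OPTIONS (
  schema_name 'sales',      -- remote schema
  table_name  'orders_v2',  -- remote table
  batch_size  '100'         -- insert up to 100 rows per round trip
);
```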
name: mongo_fdw
version: 5.5.1
source: https://github.com/EnterpriseDB/mongo_fdw


foreign_server:
  address:
    required: false
    default: 127.0.0.1
    data_type: string
    description: Address or hostname of the MongoDB server. Defaults to 127.0.0.1.
  port:
    required: false
    default: 27017
    data_type: integer
    description: Port number of the MongoDB server. Defaults to 27017.
  authentication_database:
    required: conditional
    data_type: string
    description: >
      Database against which the user will be authenticated.
      Only valid with password-based authentication.
  use_remote_estimate:
    required: false
    default: false
    data_type: boolean
    description: >
      Controls whether mongo_fdw uses the exact row count of the remote collection to obtain cost estimates.
  replica_set:
    required: false
    data_type: string
    description: >
      Replica set the server is a member of.
      If set, the driver will automatically connect to the correct primary in the replica set when writing.
  read_preference:
    required: false
    default: primary
    data_type: string
    description: >
      primary, secondary, primaryPreferred, secondaryPreferred, or nearest.
  ssl:
    required: false
    default: false
    data_type: boolean
    description: Enable SSL. See http://mongoc.org/libmongoc/current/mongoc_ssl_opt_t.html to understand the options.
  pem_file:
    required: false
    data_type: string
    description: The .pem file that contains both the TLS/SSL certificate and key.
  pem_pwd:
    required: false
    data_type: string
    description: The password to decrypt the certificate key file (i.e., pem_file).
  ca_file:
    required: false
    data_type: string
    description: The .pem file that contains the root certificate chain from the Certificate Authority.
  ca_dir:
    required: false
    data_type: string
    description: The absolute path to the directory containing the CA certificate file(s).
  crl_file:
    required: false
    data_type: string
    description: The .pem file that contains the Certificate Revocation List.
  weak_cert_validation:
    required: false
    default: false
    data_type: boolean
    description: >
      Disables validation checks for TLS/SSL certificates and allows the use of invalid certificates to connect if set to true.
  enable_join_pushdown:
    required: false
    default: true
    data_type: boolean
    description: >
      If true, pushes down a join between two foreign tables on the same foreign server, instead of fetching all the rows for both tables and performing the join locally.
      This option can also be set for an individual table, and if any of the tables involved in the join has set it to false then the join will not be pushed down.
      The table-level value of the option takes precedence over the server-level option value.
  enable_aggregate_pushdown:
    required: false
    default: true
    data_type: boolean
    description: >
      If true, pushes aggregates to the remote MongoDB server instead of fetching all of the rows and aggregating them locally.
      This option can also be set for an individual table.
      The table-level value of the option takes precedence over the server-level option value.
  enable_order_by_pushdown:
    required: false
    default: true
    data_type: boolean
    description: >
      If true, pushes the ORDER BY clause to the foreign server instead of performing a sort locally.
      This option can also be set for an individual table, and if any of the tables involved in the query has set it to false then the ORDER BY will not be pushed down.
      The table-level value of the option takes precedence over the server-level option value.
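The mongo_fdw server options above map onto CREATE SERVER in the same way. A sketch with hypothetical host and file names:

```sql
-- Hypothetical mongo_fdw server with replica-set and TLS settings.
CREATE SERVER mongo_remote
  FOREIGN DATA WRAPPER mongo_fdw
  OPTIONS (
    address 'mongo.example.com',
    port '27017',
    authentication_database 'admin',
    replica_set 'rs0',
    read_preference 'secondaryPreferred',
    ssl 'true',
    pem_file '/etc/datero/certs/mongo-client.pem',
    ca_file  '/etc/datero/certs/mongo-ca.pem'
  );
```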


user_mapping:
  username:
    required: false
    data_type: string
    description: Username to use when connecting to MongoDB.
  password:
    required: false
    data_type: string
    description: Password to authenticate to the MongoDB server.


create_foreign_table:
  database:
    required: false
    default: test
    data_type: string
    description: Name of the MongoDB database to query.
  collection:
    required: false
    data_type: string
    description: Name of the MongoDB collection to query. Defaults to the name of the foreign table.
  enable_join_pushdown:
    required: false
    default: true
    data_type: boolean
    description: Similar to the server-level option, but can be configured at table level as well.
  enable_aggregate_pushdown:
    required: false
    default: true
    data_type: boolean
    description: Similar to the server-level option, but can be configured at table level as well.
  enable_order_by_pushdown:
    required: false
    default: true
    data_type: boolean
    description: Similar to the server-level option, but can be configured at table level as well.
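The pushdown options above can be set server-wide; as a hedged illustration, a Datero server entry in the same YAML layout as the Example section at the end of this page might look like this (the server name, credentials and option values are placeholders, and connection options such as address/port are omitted):

```yaml
servers:
  analytics-mongo:
    description: MongoDB analytics cluster
    fdw_name: mongo_fdw
    foreign_server:
      # server-level defaults; table-level values take precedence
      enable_join_pushdown: true
      enable_aggregate_pushdown: true
      enable_order_by_pushdown: false
    user_mapping:
      username: reader
      password: secret
```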
name: oracle_fdw
version: 2.6.0
source: https://github.com/laurenz/oracle_fdw


foreign_server:
  dbserver:
    required: true
    data_type: string
    description: >
      The Oracle database connection string for the remote database.
      This can be in any of the forms that Oracle supports as long as your Oracle client is configured accordingly.
      Set this to an empty string for local ("BEQUEATH") connections.
  isolation_level:
    required: false
    default: serializable
    data_type: string
    description: >
      The transaction isolation level to use at the Oracle database.
      The value can be "serializable", "read_committed" or "read_only".
  nchar:
    required: false
    default: off
    data_type: boolean
    description: >
      Setting this option to "on" chooses a more expensive character conversion on the Oracle side.
      This is required if you are using a single-byte Oracle database character set,
      but have NCHAR or NVARCHAR2 columns that contain characters
      that cannot be represented in the database character set.
  set_timezone:
    required: false
    default: off
    data_type: boolean
    description: >
      Setting this option to "on" sets the Oracle session time zone to the current value
      of the PostgreSQL parameter timezone when the connection to Oracle is made.
      This is only useful if you plan to use Oracle columns of type TIMESTAMP WITH LOCAL TIME ZONE
      and want to translate them to timestamp without time zone in PostgreSQL.


user_mapping:
  user:
    required: true
    data_type: string
    description: >
      The Oracle user name for the session.
      Set this to an empty string for external authentication if you don't want to store Oracle credentials
      in the PostgreSQL database (one simple way is to use an external password store).
  password:
    required: true
    data_type: string
    description: The password for the Oracle user.


import_foreign_schema:
  case:
    required: false
    default: smart
    data_type: string
    description: >
      Controls case folding for table and column names during import.
      The possible values are:
      - keep: leave the names as they are in Oracle, usually in upper case.
      - lower: translate all table and column names to lower case.
      - smart: only translate names that are all upper case in Oracle (this is the default).
  collation:
    required: false
    default: default
    data_type: string
    description: >
      The collation used for case folding for the `lower` and `smart` options of case.
      The default value is `default` which is the database's default collation.
      Only collations in the pg_catalog schema are supported.
      See the collname values in the pg_collation catalog for a list of possible values.
  dblink:
    required: false
    data_type: string
    description: >
      The Oracle database link through which the schema is accessed.
      This name must be written exactly as it occurs in Oracle's system catalog, so it normally consists of uppercase letters only.
  readonly:
    required: false
    default: false
    data_type: boolean
    description: >
      Sets the readonly option on all imported tables.
      INSERT, UPDATE and DELETE are only allowed on tables where this option is not set to yes/on/true.
  skip_tables:
    required: false
    default: false
    data_type: boolean
    description: Don't import tables.
  skip_views:
    required: false
    default: false
    data_type: boolean
    description: Don't import views.
  skip_matviews:
    required: false
    default: false
    data_type: boolean
    description: Don't import materialized views.
  max_long:
    required: false
    default: 32767
    data_type: integer
    description: >
      Sets the max_long option on all imported tables.

      The maximal length of any LONG, LONG RAW and XMLTYPE columns in the Oracle table.
      Possible values are integers between 1 and 1073741823 (the maximal size of a bytea in PostgreSQL).
      This amount of memory will be allocated at least twice, so large values will consume a lot of memory.
      If max_long is less than the length of the longest value retrieved, you will receive the error message ORA-01406: fetched column value was truncated.
  sample_percent:
    required: false
    default: 100
    data_type: integer
    description: >
      Sets the sample_percent option on all imported tables.

      This option only influences ANALYZE processing and can be useful to ANALYZE very large tables in a reasonable time.
      The value must be between 0.000001 and 100 and defines the percentage of Oracle table blocks that will be randomly selected to calculate PostgreSQL table statistics.
      This is accomplished using the SAMPLE BLOCK (x) clause in Oracle.
      ANALYZE will fail with ORA-00933 for tables defined with Oracle queries and may fail with ORA-01446 for tables defined with complex Oracle views.
  prefetch:
    required: false
    default: 50
    data_type: integer
    description: >
      Sets the prefetch option on all imported tables.

      Sets the number of rows that will be fetched with a single round-trip between PostgreSQL and Oracle during a foreign table scan.
      The value must be between 1 and 1000.
      Higher values can speed up performance, but will use more memory on the PostgreSQL server.
      Note that there is no prefetching if the Oracle table contains columns of the type MDSYS.SDO_GEOMETRY.
  lob_prefetch:
    required: false
    default: 1048576
    data_type: integer
    description: >
      Sets the lob_prefetch option on all imported tables.

      Sets the number of bytes that are prefetched for BLOB, CLOB and BFILE values.
      LOBs that exceed that size will require additional round trips between PostgreSQL and Oracle, so setting this value bigger than the size of your typical LOB will be good for performance.
      Choosing bigger values for this option can allocate more memory on the server side, but will boost performance for large LOBs.
  nchar:
    required: false
    default: off
    data_type: boolean
    description: >
      Sets the nchar option on all imported tables.

      Setting this option to "on" chooses a more expensive character conversion on the Oracle side.
      This is required if you are using a single-byte Oracle database character set,
      but have NCHAR or NVARCHAR2 columns that contain characters
      that cannot be represented in the database character set.
  set_timezone:
    required: false
    default: off
    data_type: boolean
    description: >
      Sets the set_timezone option on all imported tables.

      Setting this option to "on" sets the Oracle session time zone to the current value
      of the PostgreSQL parameter timezone when the connection to Oracle is made.
      This is only useful if you plan to use Oracle columns of type TIMESTAMP WITH LOCAL TIME ZONE
      and want to translate them to timestamp without time zone in PostgreSQL.


create_foreign_table:
  table:
    required: true
    data_type: string
    description: >
      The Oracle table name.
      This name must be written exactly as it occurs in Oracle's system catalog, so it normally consists of uppercase letters only.
  dblink:
    required: false
    data_type: string
    description: >
      The Oracle database link through which the table is accessed.
      This name must be written exactly as it occurs in Oracle's system catalog, so it normally consists of uppercase letters only.
  schema:
    required: false
    data_type: string
    description: >
      The table's schema (or owner).
      Useful to access tables that do not belong to the connecting Oracle user.
      This name must be written exactly as it occurs in Oracle's system catalog, so it normally consists of uppercase letters only.
  max_long:
    required: false
    default: 32767
    data_type: integer
    description: >
      The maximal length of any LONG, LONG RAW and XMLTYPE columns in the Oracle table.
      Possible values are integers between 1 and 1073741823 (the maximal size of a bytea in PostgreSQL).
      This amount of memory will be allocated at least twice, so large values will consume a lot of memory.
      If max_long is less than the length of the longest value retrieved, you will receive the error message ORA-01406: fetched column value was truncated.
  readonly:
    required: false
    default: false
    data_type: boolean
    description: >
      INSERT, UPDATE and DELETE are only allowed on tables where this option is not set to yes/on/true.
  sample_percent:
    required: false
    default: 100
    data_type: integer
    description: >
      This option only influences ANALYZE processing and can be useful to ANALYZE very large tables in a reasonable time.
      The value must be between 0.000001 and 100 and defines the percentage of Oracle table blocks that will be randomly selected to calculate PostgreSQL table statistics.
      This is accomplished using the SAMPLE BLOCK (x) clause in Oracle.
      ANALYZE will fail with ORA-00933 for tables defined with Oracle queries and may fail with ORA-01446 for tables defined with complex Oracle views.
  prefetch:
    required: false
    default: 50
    data_type: integer
    description: >
      Sets the number of rows that will be fetched with a single round-trip between PostgreSQL and Oracle during a foreign table scan.
      The value must be between 1 and 1000.
      Higher values can speed up performance, but will use more memory on the PostgreSQL server.
      Note that there is no prefetching if the Oracle table contains columns of the type MDSYS.SDO_GEOMETRY.
  lob_prefetch:
    required: false
    default: 1048576
    data_type: integer
    description: >
      Sets the number of bytes that are prefetched for BLOB, CLOB and BFILE values.
      LOBs that exceed that size will require additional round trips between PostgreSQL and Oracle, so setting this value bigger than the size of your typical LOB will be good for performance.
      Choosing bigger values for this option can allocate more memory on the server side, but will boost performance for large LOBs.
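A minimal sketch of an oracle_fdw server entry using the options above, in the same YAML layout as the Example section at the end of this page (the connection string, server name and credentials are placeholders):

```yaml
servers:
  erp-oracle:
    description: ERP Oracle database
    fdw_name: oracle_fdw
    foreign_server:
      # EZCONNECT-style string; any form your Oracle client supports works
      dbserver: //erp.mycompany.com:1521/ERP
      isolation_level: read_committed
    user_mapping:
      user: scott
      password: tiger
```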
name: tds_fdw
version: master_2.0.3
source: https://github.com/tds-fdw/tds_fdw


foreign_server:
  servername:
    required: true
    default: 127.0.0.1
    data_type: string
    description: >
      The server name, address or hostname of the foreign server.

      This can be a DSN, as specified in freetds.conf. See FreeTDS name lookup.

      You can set this option to a comma separated list of server names,
      then each server is tried until the first connection succeeds.
      This is useful for automatic fail-over to a secondary server.
  port:
    required: false
    data_type: integer
    description: >
      The port of the foreign server. This is optional.
      Instead of providing a port here, it can be specified in freetds.conf (if servername is a DSN).
  database:
    required: false
    data_type: string
    description: The database to connect to for this server.
  dbuse:
    required: false
    default: 0
    data_type: integer
    description: >
      This option tells tds_fdw to connect directly to the database if dbuse is 0.
      If dbuse is not 0, tds_fdw will connect to the server's default database,
      and then select the database by calling DB-Library's dbuse() function.

      For Azure, dbuse currently needs to be set to 0.
  language:
    required: false
    data_type: string
    description: >
      The language to use for messages and the locale to use for date formats.
      FreeTDS may default to us_english on most systems. You can probably also change this in freetds.conf.

      For information related to this for MS SQL Server, see SET LANGUAGE in MS SQL Server.
      For information related to Sybase ASE, see Sybase ASE login options and SET LANGUAGE in Sybase ASE.
  character_set:
    required: false
    data_type: string
    description: The client character set to use for the connection, if you need to set this for some reason.
  tds_version:
    required: false
    data_type: string
    description: >
      The version of the TDS protocol to use for this server.
      See Choosing a TDS protocol version and History of TDS Versions.
  msg_handler:
    required: false
    default: blackhole
    data_type: string
    description: >
      The function used for the TDS message handler.
      Options are "notice" and "blackhole."
      With the "notice" option, TDS messages are turned into PostgreSQL notices.
      With the "blackhole" option, TDS messages are ignored.
  fdw_startup_cost:
    required: false
    data_type: float
    description: A cost used to represent the overhead of using this FDW in query planning.
  fdw_tuple_cost:
    required: false
    data_type: float
    description: A cost used to represent the overhead of fetching rows from this server in query planning.
  sqlserver_ansi_mode:
    required: false
    default: false
    data_type: boolean
    description: >
      This option is supported for SQL Server only. The default is "false".
      Setting this to "true" will enable the following server-side settings after a successful connection to the foreign server:

      - CONCAT_NULLS_YIELDS_NULL ON
      - ANSI_NULLS ON
      - ANSI_WARNINGS ON
      - QUOTED_IDENTIFIER ON
      - ANSI_PADDING ON
      - ANSI_NULL_DFLT_ON ON

      Those parameters in summary are comparable to the SQL Server option ANSI_DEFAULTS.
      In contrast, sqlserver_ansi_mode currently does not activate the following options:

      - CURSOR_CLOSE_ON_COMMIT
      - IMPLICIT_TRANSACTIONS

      This follows the behavior of the native ODBC and OLEDB driver for SQL Servers,
      which explicitly turn them OFF if not configured otherwise.
  use_remote_estimate:
    required: false
    data_type: boolean
    description: >
      Whether to estimate the size of the table by performing some operation on the remote server (as defined by row_estimate_method),
      or whether to just use a local estimate, as defined by local_tuple_estimate.
  row_estimate_method:
    required: false
    data_type: string
    default: execute
    description: >
      The method used to estimate the number of rows in the query.
      Possible values are "execute" to execute the query on the remote server and get the actual number of rows,
      or "showplan_all" to get the estimated number of rows using MS SQL Server's SET SHOWPLAN_ALL.


user_mapping:
  username:
    required: true
    description: Username to use when connecting to the MSSQL/Sybase server.
  password:
    required: true
    description: Password to authenticate to the MSSQL/Sybase server.


import_foreign_schema:
  import_default:
    required: false
    default: false
    data_type: boolean
    description: Controls whether column DEFAULT expressions are included in the definitions of foreign tables.
  import_not_null:
    required: false
    default: true
    data_type: boolean
    description: Controls whether column NOT NULL constraints are included in the definitions of foreign tables.


create_foreign_table:
  query:
    required: conditional
    data_type: string
    description: >
      The query string to use to query the foreign table.
      Mutually exclusive with "table_name" option.
  schema_name:
    required: false
    data_type: string
    description: >
      The schema that the table is in.
      The schema name can also be included in table_name, so this is not required.
  table_name:
    required: conditional
    data_type: string
    description: >
      The table on the foreign server to query.
      Mutually exclusive with "query" option.
      Option alias: "table"
  match_column_names:
    required: false
    data_type: boolean
    description: >
      Whether to match local columns with remote columns by comparing their names, or whether to use the order in which they appear in the result set.
  use_remote_estimate:
    required: false
    data_type: boolean
    description: >
      Whether to estimate the size of the table by performing some operation on the remote server (as defined by row_estimate_method),
      or whether to just use a local estimate, as defined by local_tuple_estimate.
  local_tuple_estimate:
    required: false
    data_type: integer
    description: >
      A locally set estimate of the number of tuples that is used when use_remote_estimate is disabled.
  row_estimate_method:
    required: false
    data_type: string
    default: execute
    description: >
      The method used to estimate the number of rows in the query.
      Possible values are "execute" to execute the query on the remote server and get the actual number of rows,
      or "showplan_all" to get the estimated number of rows using MS SQL Server's SET SHOWPLAN_ALL.


foreign_table_column:
  column_name:
    required: false
    data_type: string
    description: >
      The name of the column on the remote server.
      If this is not set, the column's remote name is assumed to be the same as the column's local name.
      If match_column_names is set to 0 for the table, then column names are not used at all, so this is ignored.
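A hedged sketch of a tds_fdw server entry combining the options above, in the same YAML layout as the Example section at the end of this page (hostname, credentials and the TDS protocol version are placeholders; check the version your server supports):

```yaml
servers:
  crm-mssql:
    description: CRM on SQL Server
    fdw_name: tds_fdw
    foreign_server:
      servername: crm.mycompany.com
      port: 1433
      database: crm
      tds_version: "7.4"
      sqlserver_ansi_mode: true
    user_mapping:
      username: crm_reader
      password: secret
```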
name: sqlite_fdw
version: 2.4.0
source: https://github.com/pgspider/sqlite_fdw


foreign_server:
  database:
    required: true
    data_type: string
    description: SQLite database path.
  updatable:
    required: false
    default: true
    data_type: boolean
    description: >
      This option allows or disallows write operations on the SQLite database file.
  truncatable:
    required: false
    default: true
    data_type: boolean
    description: >
      Allows foreign tables to be truncated using the TRUNCATE command.
  keep_connections:
    required: false
    default: true
    data_type: boolean
    description: >
      Keeps connections to SQLite open while there are no SQL operations between PostgreSQL and SQLite.
  batch_size:
    required: false
    default: 1
    data_type: integer
    description: >
      Specifies the number of rows which should be inserted in a single INSERT operation.
      This setting can be overridden for individual tables.


# Note: SQLite doesn't have schemas. To support the common IMPORT FOREIGN SCHEMA syntax,
# a dummy value must be used for the remote_schema parameter.
# IMPORT FOREIGN SCHEMA "some_schema" FROM SERVER sqlite_server INTO public;
# "some_schema" has no particular meaning for SQLite and can be set to an arbitrary value.
import_foreign_schema:
  import_default:
    required: false
    default: false
    data_type: boolean
    description: Allow borrowing default values from SQLite table DDL.
  import_not_null:
    required: false
    default: true
    data_type: boolean
    description: Allow borrowing NULL/NOT NULL constraints from SQLite table DDL.


create_foreign_table:
  table:
    required: false
    data_type: string
    description: >
      SQLite table name.
      Use it if it differs from the name of the foreign table in PostgreSQL.
      Also see about identifier case handling.
  truncatable:
    required: false
    data_type: boolean
    description: >
      Default from the same CREATE SERVER option.
      See CREATE SERVER options section for details.
  batch_size:
    required: false
    data_type: integer
    description: >
      Default from the same CREATE SERVER option.
      See CREATE SERVER options section for details.
  updatable:
    required: false
    data_type: boolean
    description: >
      This option can allow or disallow write operations on a SQLite table independently of the same server option.


foreign_table_column:
  column_name:
    required: false
    data_type: string
    description: >
      This option gives the column name to use for the column on the remote server.
      Also see about identifier case handling.
  column_type:
    required: false
    data_type: string
    description: >
      Sets the preferred SQLite affinity for PostgreSQL data types that can be stored in different ways in SQLite (the mixed affinity case).
      Updated and inserted values will have this affinity.
      The default preferred SQLite affinity for the timestamp and uuid PostgreSQL data types is text.

      Use INT for a SQLite column (epoch Unix Time) that should be treated/visualized as timestamp in PostgreSQL.

      Use BLOB for a SQLite column that should be treated/visualized as uuid in PostgreSQL 14+.
  key:
    required: false
    data_type: boolean
    description: >
      Indicates a column as a part of primary key or unique key of SQLite table.
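For reference, a read-only sqlite_fdw server entry might look like this (the database path is a placeholder; it must be reachable inside the Datero container, e.g. under the /data directory used for file-based data sources):

```yaml
servers:
  app-sqlite:
    description: Application SQLite file
    fdw_name: sqlite_fdw
    foreign_server:
      database: /data/app.db
      updatable: false    # expose the file read-only
```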
name: duckdb_fdw
version: 2.1.1
source: https://github.com/alitrack/duckdb_fdw


foreign_server:
  database:
    required: true
    data_type: string
    description: >
      DuckDB database path.
  truncatable:
    required: false
    default: false
    data_type: boolean
    description: >
      Allows foreign tables to be truncated using the TRUNCATE command.
  keep_connections:
    required: false
    default: false
    data_type: boolean
    description: >
      Keeps connections to DuckDB open while there are no SQL operations between PostgreSQL and DuckDB.
  batch_size:
    required: false
    default: 1
    data_type: integer
    description: >
      Specifies the number of rows which should be inserted in a single INSERT operation.
      This setting can be overridden for individual tables.
  temp_directory:
    required: false
    default: NULL
    data_type: string
    description: >
      Specifies the directory to which to write temp files.


create_foreign_table:
  table:
    required: false
    data_type: string
    description: >
      DuckDB table name.
      Use it if it differs from the name of the foreign table in PostgreSQL.
      Also see about identifier case handling.
  truncatable:
    required: false
    data_type: boolean
    description: >
      Default from the same CREATE SERVER option.
      See CREATE SERVER options section for details.
  batch_size:
    required: false
    data_type: integer
    description: >
      Default from the same CREATE SERVER option.
      See CREATE SERVER options section for details.


foreign_table_column:
  column_name:
    required: false
    data_type: string
    description: >
      This option gives the column name to use for the column on the remote server. 
      Also see about identifier case handling.
  column_type:
    required: false
    data_type: string
    description: >
      Option to treat/visualize an INT DuckDB column (epoch Unix Time) as TIMESTAMP in PostgreSQL.
  key:
    required: false
    default: false
    data_type: boolean
    description: >
      Indicates a column as a part of primary key or unique key of DuckDB table.
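Similarly, a hedged duckdb_fdw sketch (the database path is a placeholder; file-based sources live under /data in the Datero layout):

```yaml
servers:
  analytics-duckdb:
    description: DuckDB analytics file
    fdw_name: duckdb_fdw
    foreign_server:
      database: /data/analytics.duckdb
      keep_connections: true      # reuse the DuckDB connection between queries
      temp_directory: /tmp/duckdb
```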
name: file_fdw
version: 16.2.0
source: https://www.postgresql.org/docs/current/file-fdw.html


create_foreign_table:
  filename:
    required: conditional
    data_type: string
    description: >
      Specifies the file to be read.
      Relative paths are relative to the data directory.
      Either filename or program must be specified, but not both.
  program:
    required: conditional
    data_type: string
    description: >
      Specifies the command to be executed.
      The standard output of this command will be read as though COPY FROM PROGRAM were used.
      Either program or filename must be specified, but not both.
  format:
    required: false
    data_type: string
    description: >
      Specifies the data format, the same as COPY's FORMAT option.
  header:
    required: false
    data_type: boolean
    description: >
      Specifies whether the data has a header line, the same as COPY's HEADER option.
  delimiter:
    required: false
    data_type: string
    description: >
      Specifies the data delimiter character, the same as COPY's DELIMITER option.
  quote:
    required: false
    data_type: string
    description: >
      Specifies the data quote character, the same as COPY's QUOTE option.
  escape:
    required: false
    data_type: string
    description: >
      Specifies the data escape character, the same as COPY's ESCAPE option.
  "null":
    required: false
    data_type: string
    description: >
      Specifies the data null string, the same as COPY's NULL option.
  encoding:
    required: false
    data_type: string
    description: >
      Specifies the data encoding, the same as COPY's ENCODING option.


foreign_table_column:
  force_not_null:
    required: false
    data_type: boolean
    description: >
      If true, it specifies that values of the column should not be matched against the null string (that is, the table-level null option). 
      This has the same effect as listing the column in COPY's FORCE_NOT_NULL option.
  force_null:
    required: false
    data_type: boolean
    description: >
      If true, it specifies that values of the column which match the null string are returned as NULL even if the value is quoted. 
      Without this option, only unquoted values matching the null string are returned as NULL. 
      This has the same effect as listing the column in COPY's FORCE_NULL option.
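file_fdw defines no server- or user-level options, so any sketch can only show table-level options. The create_foreign_table key below reuses this reference's own section name and is purely illustrative (the Example section at the end of this page only demonstrates server-level keys):

```yaml
servers:
  sales-csv:
    description: Sales CSV exports
    fdw_name: file_fdw
    # hypothetical table-level options, mirroring the reference above
    create_foreign_table:
      filename: /data/sales.csv   # mutually exclusive with "program"
      format: csv
      header: true
      delimiter: ","
```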
name: redis_fdw
version: 16.2.0
source: https://github.com/pg-redis-fdw/redis_fdw


foreign_server:
  address:
    required: false
    default: 127.0.0.1
    data_type: string
    description: Address or hostname of the Redis server. Defaults to 127.0.0.1.
  port:
    required: false
    default: 6379
    data_type: integer
    description: Port number of the Redis server. Defaults to 6379.


user_mapping:
  password:
    required: false
    data_type: string
    description: >
      Password to be used if the server demands password authentication.
      By default, no password is required.


create_foreign_table:
  database:
    required: false
    default: 0
    data_type: integer
    description: The numeric ID of the Redis database to query.
  tabletype:
    required: false
    default: none
    data_type: string
    description: >
      Can be 'hash', 'list', 'set' or 'zset'.
      Default: none, meaning only look at scalar values.
  tablekeyprefix:
    required: conditional
    default: none
    data_type: string
    description: >
      Only get items whose names start with the prefix.
      You can only have one of tablekeyset and tablekeyprefix, and if you use singleton_key you can't have either.
  tablekeyset:
    required: conditional
    default: none
    data_type: string
    description: >
      Fetch item names from the named set.
      You can only have one of tablekeyset and tablekeyprefix, and if you use singleton_key you can't have either.
  singleton_key:
    required: conditional
    default: none
    data_type: string
    description: >
      Get all the values in the table from a single named object.
      Default: none, meaning don't just use a single object.
      You can only have one of tablekeyset and tablekeyprefix, and if you use singleton_key you can't have either.
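A hedged redis_fdw server sketch using the connection options above (address and password are placeholders):

```yaml
servers:
  session-redis:
    description: Session cache
    fdw_name: redis_fdw
    foreign_server:
      address: cache.mycompany.com
      port: 6379
    user_mapping:
      password: secret
```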

Example

Here is an example of a configuration file that defines Postgres, MySQL and DuckDB datasources.

Example configuration file
postgres:
  hostname: localhost
  port: 5432
  database: postgres
  username: postgres
  password: postgres

servers:
  factory-db:
    description: Factory database
    fdw_name: postgres_fdw
    foreign_server:
      host: factory.mycompany.com
      port: 5432
      dbname: factory
      sslmode: require
      sslcertmode: disable
    user_mapping:
      user: postgres
      password: postgres

  financial_reporting:
    description: Financial regulatory reporting
    fdw_name: mysql_fdw
    foreign_server:
      host: reporting.mycompany.com
      port: 3306
    user_mapping:
      username: some_user
      password: password

  orders_data:
    description: Orders data dump from e-commerce
    fdw_name: duckdb_fdw
    foreign_server:
      database: /home/data/json_files.duckdb

The Datero backend connects to the local Postgres database using the settings in the postgres section. The factory-db datasource uses postgres_fdw to connect to the factory database. In the financial_reporting datasource, the mysql_fdw extension is used to connect to the financial regulatory reporting database. The orders_data datasource uses the duckdb_fdw extension to connect to the orders data dump from e-commerce, stored in JSON files on local storage (which must be mounted into the Datero container as a volume).