Following the project from here, I am trying to integrate the Airflow Kubernetes executor using an NFS server as the backing-storage PV.

I have a PV `airflow-pv` which is linked with the NFS server. The Airflow webserver and scheduler are using a PVC `airflow-pvc` which is bound to `airflow-pv`. I've placed my DAG files on the NFS server under `/var/nfs/airflow/development/`, and I can see the newly added DAGs in the webserver UI as well.

However, when I execute a DAG from the UI, the scheduler fires a new pod for that task, BUT the new worker pod fails to run.

Here are my webserver and scheduler deployment files (the scheduler yaml file is exactly the same except for the container args):

```yaml
apiVersion: v1
...
    mountPath: /usr/local/airflow/airflow.cfg
```

Here is my airflow.cfg file:

```ini
apiVersion: v1
...
base_log_folder = /usr/local/airflow/logs
plugins_folder = /usr/local/airflow/plugins
...
child_process_log_directory = /usr/local/airflow/logs/scheduler
...
worker_container_image_pull_policy = IfNotPresent

# the key-value pairs to be given to worker pods.
# the worker pods will be scheduled to the nodes of the specified key-value pairs.
# should be supplied in the format: key = value
```
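For reference, the PV/PVC wiring for an NFS-backed setup like this looks roughly as follows. This is a minimal sketch; the NFS server address and storage size are placeholders, not values from my cluster:

```yaml
# NFS-backed PV exposing the directory that holds the DAGs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: airflow-pv
spec:
  capacity:
    storage: 2Gi                      # placeholder size
  accessModes:
    - ReadWriteMany                   # NFS can be mounted read-write by many pods
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 10.0.0.10                 # placeholder NFS server address
    path: /var/nfs/airflow/development
---
# Claim bound to the PV above; webserver, scheduler, and workers mount this
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: airflow-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""                # empty class so it binds to the static PV
  resources:
    requests:
      storage: 2Gi
```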
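On the deployment side, mounting a single file at `/usr/local/airflow/airflow.cfg` is typically done with a `subPath` mount from a ConfigMap. The sketch below assumes a hypothetical `airflow-config` ConfigMap and a placeholder image, so it shows the shape rather than my exact manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: airflow-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: airflow-webserver
  template:
    metadata:
      labels:
        app: airflow-webserver
    spec:
      containers:
        - name: webserver
          image: my-registry/airflow:latest     # placeholder image
          args: ["webserver"]
          volumeMounts:
            - name: airflow-config
              mountPath: /usr/local/airflow/airflow.cfg
              subPath: airflow.cfg              # mounts the single file, not a directory
            - name: airflow-dags
              mountPath: /usr/local/airflow/dags
      volumes:
        - name: airflow-config
          configMap:
            name: airflow-config                # hypothetical ConfigMap holding airflow.cfg
        - name: airflow-dags
          persistentVolumeClaim:
            claimName: airflow-pvc              # the NFS-backed claim from above
```

The scheduler deployment would differ only in its `args` (e.g. `["scheduler"]` instead of `["webserver"]`).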
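And for the worker pods themselves, the `[kubernetes]` section of airflow.cfg is what tells the executor to mount the DAGs claim into each worker. A minimal sketch, assuming Airflow 1.10-era key names, with the image, ConfigMap name, and namespace as placeholders:

```ini
[kubernetes]
# Image the executor launches for each task (placeholders)
worker_container_repository = my-registry/airflow
worker_container_tag = latest
worker_container_image_pull_policy = IfNotPresent
# Mount the NFS-backed claim so worker pods see the same DAGs folder
dags_volume_claim = airflow-pvc
# Hand workers the same airflow.cfg via a ConfigMap (placeholder name)
airflow_configmap = airflow-config
# Namespace the worker pods are created in
namespace = airflow
```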