Thuc Nguyen Canh
Can any expert help? I will make a call; I'm very stressed today with this issue.
Thuc Nguyen Canh
[2021-08-17 07:07:15,579] {jobs.py:1109} INFO - Tasks up for execution:
        <TaskInstance: Dashboard_C2C.Dashboard_C2C 2021-08-17 06:40:14.195012+00:00 [scheduled]>
[2021-08-17 07:07:15,583] {jobs.py:1144} INFO - Figuring out tasks to run in Pool(name=None) with 128 open slots and 1 task instances in queue
[2021-08-17 07:07:15,586] {jobs.py:1180} INFO - DAG Dashboard_C2C has 0/2 running and queued tasks
[2021-08-17 07:07:15,586] {jobs.py:1218} INFO - Setting the follow tasks to queued state:
        <TaskInstance: Dashboard_C2C.Dashboard_C2C 2021-08-17 06:40:14.195012+00:00 [scheduled]>
[2021-08-17 07:07:15,597] {jobs.py:1301} INFO - Setting the follow tasks to queued state:
        <TaskInstance: Dashboard_C2C.Dashboard_C2C 2021-08-17 06:40:14.195012+00:00 [queued]>
[2021-08-17 07:07:15,597] {jobs.py:1343} INFO - Sending ('Dashboard_C2C', 'Dashboard_C2C', datetime.datetime(2021, 8, 17, 6, 40, 14, 195012, tzinfo=<TimezoneInfo [UTC, GMT, +00:00:00, STD]>), 1) to executor with priority 1 and queue default
[2021-08-17 07:07:15,598] {base_executor.py:56} INFO - Adding to queue: airflow run Dashboard_C2C Dashboard_C2C 2021-08-17T06:40:14.195012+00:00 --local -sd /root/airflow-dags/dags/Dashboard_C2C.py
Process QueuedLocalWorker-2:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
  File "/usr/local/lib/python3.6/dist-packages/airflow/executors/local_executor.py", line 113, in run
    key, command = self.task_queue.get()
  File "/usr/lib/python3.6/multiprocessing/queues.py", line 113, in get
    return _ForkingPickler.loads(res)
TypeError: __init__() missing 5 required positional arguments: 'tz', 'utc_offset', 'is_dst', 'dst', and 'abbrev'
Currently I'm getting the above issue with the LocalExecutor.
Thuc Nguyen Canh
I can make it work with 'airflow test' but it fails with 'airflow run'.
Thuc Nguyen Canh
Hi, I fixed it by reinstalling Airflow, so maybe there was a library conflict.
Rohith Madamshetty
Can anyone explain how the Kubernetes pod gets created with all its specs when we use the pod operator in Airflow? In the pod operator we specify arguments, Kubernetes secrets, resources, and DAG parameters, but how do these get turned into a pod in the cluster? I ask because I see some extra env variables added to my pod, visible in its pod spec file in the cluster.
Avinash Pallerlamudi
Hello Everyone,
How do I call a BigQuery stored procedure from an Airflow task? Is there an operator for this?
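One possible approach (a sketch, not something confirmed in this thread): BigQuery runs stored procedures through a plain `CALL` query, so any operator that submits a query job can invoke one. The helper below only builds the job-configuration dict; in a DAG it would be passed to the Google provider's BigQueryInsertJobOperator. All project/dataset/procedure names are placeholders.

```python
# Sketch: a BigQuery stored procedure is invoked with a plain `CALL` statement,
# so it can be submitted as an ordinary query job. This helper only builds the
# configuration dict; in a DAG (with apache-airflow-providers-google installed)
# it would be passed as BigQueryInsertJobOperator(configuration=...).
# Project/dataset/procedure names are placeholders.

def stored_proc_job_config(project: str, dataset: str, proc: str) -> dict:
    """Build a BigQuery query-job configuration that calls a stored procedure."""
    return {
        "query": {
            "query": f"CALL `{project}.{dataset}.{proc}`()",
            "useLegacySql": False,  # CALL requires Standard SQL
        }
    }

config = stored_proc_job_config("my-project", "my_dataset", "my_proc")
# In a DAG file this would look roughly like:
#   BigQueryInsertJobOperator(task_id="call_proc", configuration=config)
```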
Fran Sánchez
Do you know when 2.2.1 is expected?
Hello, I am getting this error on Airflow:
import pwd
ModuleNotFoundError: No module named 'pwd'
Hi, I am looking for a precise answer: can we create a DAG dynamically using the Airflow API?
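For context, a sketch of what "dynamic DAGs" usually means in practice: the stable REST API triggers and manages existing DAGs rather than creating new ones, but a single Python file in the dags folder can generate many DAGs in a loop. The team names and `make_dag_id` helper below are illustrative only, and the actual `DAG(...)` construction is shown as a comment so the sketch does not require Airflow to be installed.

```python
# Sketch of the dynamic-DAG pattern: one file in the dags folder registers
# many DAG objects by assigning them into the module's globals(), which is
# where the scheduler discovers them. Team names here are made up.
TEAMS = ["sales", "finance", "ops"]

def make_dag_id(team: str) -> str:
    """Derive a unique dag_id for each generated DAG."""
    return f"report_{team}"

# In a real dags-folder file the loop would build actual DAG objects:
#   from airflow import DAG
#   for team in TEAMS:
#       dag = DAG(dag_id=make_dag_id(team), schedule_interval="@daily")
#       globals()[dag.dag_id] = dag

generated = [make_dag_id(t) for t in TEAMS]
```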

Hello All,
Looking for some help with an issue I'm facing after migrating from Airflow 1.10.11 to 2.2.3. I'm unable to execute DAGs from the UI, as the task stays in the queued state. I created the pod_template_file as suggested in the migration steps but still get the error below:
"airflow.exceptions.AirflowException: Dag 'xxxxxxx' could not be found; either it does not exist or it failed to parse"

But I can see the DAGs inside the webserver and scheduler in the exact location the task is trying to read from, which is "/opt/airflow/dags/repo/". Surprisingly, I'm able to trigger the DAG from the webserver's command line, and it completes successfully. Any help please?

Hello everyone, I have my access control set up and would like to disable the login page. According to the documentation, AUTH_ROLE_PUBLIC = 'Admin' should help, but the problem is not solved... Has anyone encountered something similar?
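One thing worth checking (a guess, since the thread doesn't say where the setting was placed): AUTH_ROLE_PUBLIC belongs in webserver_config.py, the Flask-AppBuilder configuration the webserver loads, not in airflow.cfg, and the webserver must be restarted afterwards. A minimal sketch of the relevant fragment:

```python
# webserver_config.py -- Flask-AppBuilder settings loaded by the Airflow
# webserver (not airflow.cfg). Granting the anonymous/public role Admin
# rights effectively disables the login page. Restart the webserver after
# changing this file.
AUTH_ROLE_PUBLIC = 'Admin'
```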
Vigneshwar Subramanian

Hi All, Looking for some help in troubleshooting my airflow setup
I have set up Airflow in Azure Cloud (Azure Container Apps) and attached an Azure File Share as an external mount/volume.

  1. I ran the airflow init service; it created airflow.cfg and 'webserver_config.py' in AIRFLOW_HOME (/opt/airflow), which is actually an Azure-mounted file system
  2. I ran the airflow webserver service; it created the airflow-webserver.pid file in the same Azure-mounted AIRFLOW_HOME
  3. The problem is that all the files above were created with root user and group, not as the airflow user (50000),
    even though I set the env variable AIRFLOW_UID to 50000 when creating the container app. Because of this my webservers are not starting, throwing the error below:
    PermissionError: [Errno 1] Operation not permitted: '/opt/airflow/airflow-webserver.pid'

Attached screenshot for reference

Your help is much appreciated!
Josué Pradas
Hello everyone, does anyone know why this error "cannot pickle 'LockType' object" usually happens when clearing a task on Airflow?
deepan ramachandran
I have an issue with ModuleNotFoundError: No module named 'pwd'.
I tried to install without Docker or WSL2. Any help?
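A note on this one: `pwd` is a Unix-only standard-library module (it reads the Unix account database), so importing it on native Windows raises exactly this error, which is why Airflow needs Linux, Docker, or WSL2. A small illustration:

```python
# `pwd` exists in the standard library on Unix only; on native Windows the
# import raises ModuleNotFoundError, which is why running Airflow on Windows
# without Docker/WSL2 fails at import time.
import os
import sys

if sys.platform != "win32":
    import pwd
    # Look up the account record for the current process's user id.
    account = pwd.getpwuid(os.getuid())
```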
Rishi Kaushal
I'm getting the following error when the airflow webserver goes down: ERROR - No response from gunicorn master within 120 seconds
The error appears in the webserver.log file.
Can anyone tell me what could cause this error and how to fix it?
Hello people, how can I display the connections that we pull from Vault in the web UI?
Prakash Chandrasekaran
We wrote a simple Python program that connects to a Postgres DB in AWS RDS and executes a SELECT query. However, it fails with a connection timed out error. We are able to connect to the DB from the Docker container in which Airflow runs, though. Any help is appreciated.
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: Connection timed out
Is the server running on host “hostname.domain.com” (99.999.999.99) and accepting
TCP/IP connections on port 5432?
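A timeout (as opposed to "connection refused") usually points at networking, e.g. the RDS security group not allowing traffic from the host the script runs on. A quick, driver-independent reachability check may help narrow it down; the host and port in the commented example are placeholders from the error message above.

```python
# TCP reachability probe, independent of psycopg2: if this fails with a
# timeout, the problem is network access (security groups, routing, VPC),
# not the SQL code or the driver.
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with the placeholder host from the error: if this returns False
# here but True inside the Docker container, the two environments have
# different network access to the database.
# can_reach("hostname.domain.com", 5432)
```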
Prakash Chandrasekaran
@EthanBeauvais were you able to resolve this error?
(psycopg2.OperationalError) could not connect to server: Connection timed out Is the server running on host “dbinfo” (IP address) and accepting TCP/IP connections on port 5432?
Hello everyone, I have a question about Apache Airflow 1.10.10. I found errors like "airflow.exceptions.AirflowException: Celery command failed". Can anyone help me?
This is very strange: the upstream task has succeeded, but the downstream task is not running; it is marked upstream_failed.