Helm Chart Version

2.0.19

What step the error happened?

During the Sync

Relevant information

When setting the JOB_MAIN_CONTAINER_CPU_LIMIT variable (or possibly the JOB_MAIN_CONTAINER_CPU_REQUEST variable) using millicores instead of a full unit, the DATA_CHANNEL_SOCKET_PATHS variable will have tons of sockets listed. We set these variables using these Helm chart values, but I believe this issue would happen regardless of how the main container CPU is set:

global:
  workloads:
    resources:
      mainContainer:
        cpu:
          request: '500m'
          limit: '4000m'

For us, this variable became so large that the source/destination containers of the replication job for our connectors using the MySQL and MSSQL sources would fail to start with the error:

exec /usr/bin/sh: argument list too long

When inspecting the DATA_CHANNEL_SOCKET_PATHS variable in the running replication job containers, we found it was huge:

DATA_CHANNEL_SOCKET_PATHS: /var/run/sockets/airbyte_socket_1.sock,/var/run/sockets/airbyte_socket_2.sock,/var/run/sockets/airbyte_socket_3.sock,/var/run/sockets/airbyte_socket_4.sock,/var/run/sockets/airbyte_socket_5.sock,/var/run/sockets/airbyte_socket_6.sock,/var/run/sockets/airbyte_socket_7.sock,/var/run/sockets/airbyte_socket_8.sock,/var/run/sockets/airbyte_socket_9.sock,/var/run/sockets/airbyte_socket_10.sock,/var/run/sockets/airbyte_socket_11.sock,/var/run/sockets/airbyte_socket_12.sock,/var/run/sockets/airbyte_socket_13.sock,/var/run/sockets/airbyte_socket_14.sock,/var/run/sockets/airbyte_socket_15.sock,/var/run/sockets/airbyte_socket_16.sock,/var/run/sockets/airbyte_socket_17.sock,/var/run/sockets/airbyte_socket_18.sock,/var/run/sockets/airbyte_socket_19.sock,/var/run/sockets/airbyte_socket_20.sock,/var/run/sockets/airbyte_socket_21.sock,/var/run/sockets/airbyte_socket_22.sock,/var/run/sockets/airbyte_socket_23.sock,/var/run/sockets/airbyte_socket_24.sock,/var/run/sockets/airbyte_socket_25.sock,/var/run/sockets/airbyte_socket_26.sock,/var/run/sockets/airbyte_socket_27.sock,/var/run/sockets/airbyte_socket_28.sock,/var/run/sockets/airbyte_socket_29.sock,/var/run/sockets/airbyte_socket_30.sock,/var/run/sockets/airbyte_socket_31.sock,/var/run/sockets/airbyte_socket_32.sock,/var/run/sockets/airbyte_socket_33.sock,/var/run/sockets/airbyte_socket_34.sock,/var/run/sockets/airbyte_socket_35.sock,/var/run/sockets/airbyte_socket_36.sock,/var/run/sockets/airbyte_socket_37.sock,/var/run/sockets/airbyte_socket_38.sock,/var/run/sockets/airbyte_socket_39.sock,/var/run/sockets/airbyte_socket_40.sock,/var/run/sockets/airbyte_socket_41.sock,/var/run/sockets/airbyte_socket_42.sock,/var/run/sockets/airbyte_socket_43.sock,/var/run/sockets/airbyte_socket_44.sock,/var/run/sockets/airbyte_socket_45.sock,/var/run/sockets/airbyte_socket_46.sock,/var/run/sockets/airbyte_socket_47.sock,/var/run/sockets/airbyte_socket_48.sock,/var/run/sockets/airbyte_socket_49.sock,/var/run/sockets/airbyte_socket_50.sock,/var/run/sockets/airbyte_socket_51.sock,/var/run/sockets/airbyte_socket_52.sock,/var/run/sockets/airbyte_socket_53.sock,/var/run/sockets/airbyte_socket_54.sock,/var/run/sockets/airbyte_socket_55.sock,/var/run/sockets/airbyte_socket_56.sock,/var/run/sockets/airbyte_socket_57.sock,/var/run/sockets/airbyte_socket_58.sock,/var/run/sockets/airbyte_socket_59.sock,/var/run/sockets/airbyte_socket_60.sock,/var/run/sockets/airbyte_socket_61.sock,/var/run/sockets/airbyte_socket_62.sock,/var/run/sockets/airbyte_socket_63.sock,/var/run/sockets/airbyte_socket_64.sock,/var/run/sockets/airbyte_socket_65.sock,/var/run/sockets/airbyte_socket_66.sock,/var/run/sockets/airbyte_socket_67.sock,/var/run/sockets/airbyte_socket_68.sock,/var/run/sockets/airbyte_socket_69.sock,/var/run/sockets/airbyte_socket_70.sock,/var/run/sockets/airbyte_socket_71.sock,/var/run/sockets/airbyte_socket_72.sock,/var/run/sockets/airbyte_socket_73.sock,/var/run/sockets/airbyte_socket_74.sock,/var/run/sockets/airbyte_socket_75.sock,/var/run/sockets/airbyte_socket_76.sock,/var/run/sockets/airbyte_socket_77.sock,/var/run/sockets/airbyte_socket_78.sock,/var/run/sockets/airbyte_socket_79.sock,/var/run/sockets/airbyte_socket_80.sock,/var/run/sockets/airbyte_socket_81.sock,/var/run/sockets/airbyte_socket_82.sock,/var/run/sockets/airbyte_socket_83.sock,/var/run/sockets/airbyte_socket_84.sock,/var/run/sockets/airbyte_socket_85.sock,/var/run/sockets/airbyte_socket_86.sock,/var/run/sockets/airbyte_socket_87.sock,/var/run/sockets/airbyte_socket_88.sock,/var/run/

This caused the Airbyte sync to report as successful, but 0 bytes were loaded each time.

After we realized what was happening, we switched the main container CPU request to use full units; the DATA_CHANNEL_SOCKET_PATHS variable looked normal and the source/destination containers in the replication job started successfully:

DATA_CHANNEL_SOCKET_PATHS: /var/run/sockets/airbyte_socket_1.sock,/var/run/sockets/airbyte_socket_2.sock,/var/run/sockets/airbyte_socket_3.sock,/var/run/sockets/airbyte_socket_4.sock

Relevant log output

exec /usr/bin/sh: argument list too long

Internal Tracking: https://github.com/airbytehq/oncall/issues/11120
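For anyone hitting this: the workaround of switching to whole CPU units can be expressed in the same Helm values shown above. The specific numbers below ('1' and '4') are illustrative assumptions chosen to roughly match the millicore values in the report, not confirmed working values; size them for your own workload.

```yaml
global:
  workloads:
    resources:
      mainContainer:
        cpu:
          request: '1'  # whole units instead of '500m' (assumed equivalent, rounded up)
          limit: '4'    # whole units instead of '4000m'
```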
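The failure mode itself is not Airbyte-specific. On Linux, execve() rejects an argument or environment string larger than MAX_ARG_STRLEN (32 pages, i.e. 128 KiB on typical x86-64 kernels) with E2BIG, which surfaces as "argument list too long". A minimal sketch, assuming a Linux host, that builds an oversized DATA_CHANNEL_SOCKET_PATHS-style value and attempts an exec with it:

```python
import subprocess

# Build a single env value well past Linux's 128 KiB per-string limit
# (MAX_ARG_STRLEN = 32 * 4096 = 131072 bytes on typical x86-64 kernels).
# The socket count here is arbitrary, just large enough to exceed the limit.
huge_value = ",".join(
    f"/var/run/sockets/airbyte_socket_{i}.sock" for i in range(1, 5000)
)
print(len(huge_value))  # roughly 200 KiB, past the 128 KiB limit

try:
    # On Linux the child's execve() fails with E2BIG before the
    # program ever runs, and subprocess re-raises it in the parent.
    subprocess.run(
        ["/bin/true"],
        env={"DATA_CHANNEL_SOCKET_PATHS": huge_value},
        check=True,
    )
except OSError as exc:
    print(exc)  # e.g. "[Errno 7] Argument list too long"
```

This is why the replication containers die at exec time rather than failing inside the connector: the kernel refuses to start the process at all once the environment string crosses the limit.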