failed to flush chunk

When Fluent Bit cannot deliver a buffered chunk to its output, the engine logs a warning and schedules a retry:

[2022/03/25 07:08:31] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 9 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)

On its own the message only says that a flush attempt failed. The surrounding debug lines ([upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled), [input:tail:tail.0] inode=69179617 events: IN_MODIFY, [out coro] cb_destroy coro_id=..., [outputes.0] HTTP Status=200 URI=/_bulk, [task] created task=... OK) are normal operation, so there is nothing special to see except "failed to flush chunk" and, eventually, that the chunk cannot be retried. The actual reason is in the Elasticsearch bulk response: the request itself returns HTTP 200, but individual items come back with status 400:

{"took":2414,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"juMmun8BI6SaBP9l7LpS","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}

Elasticsearch cannot index these records: the Kubernetes label key app.kubernetes.io/instance contains dots, so dynamic mapping tries to create a nested object under kubernetes.labels.app, while the existing index mapping already defines kubernetes.labels.app as text. Every chunk that contains such a record is rejected, retried and finally dropped. The behaviour has been reported with Fluent Bit 1.8.12, 1.8.15 and 1.9.0, installed via the fluent-bit 0.19.19 Helm chart, with Logstash_Format On in the es output. The usual suggestion is to enable Replace_Dots On so that dots in key names are rewritten before the documents reach Elasticsearch, although one user on 2.0.6 reports that the mass of warnings persists no matter whether Type _doc or Replace_Dots On is set.
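A minimal sketch of the es output with that workaround, assembled from the configuration fragments quoted in these reports; the Helm-templated host, the kube.* match pattern and the Retry_Limit line are assumptions rather than a verified working setup:

[OUTPUT]
    Name            es
    # assumed match pattern for the Kubernetes tail input
    Match           kube.*
    # Helm-templated Elasticsearch service name from the subchart values
    Host            {{ .Release.Name }}-elasticsearch-master
    Port            9200
    # write to logstash-YYYY.MM.DD indices, as in the bulk responses above
    Logstash_Format On
    # replace dots in key names so app.kubernetes.io/instance becomes app_kubernetes_io/instance
    Replace_Dots    On
    # assumption: keep retrying instead of eventually dropping the chunk
    Retry_Limit     False

With Replace_Dots On the label keys no longer contain dots, so Elasticsearch does not try to turn kubernetes.labels.app into a nested object and the bulk items stop failing with mapper_parsing_exception.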
Whether the warning is harmful depends on what the output actually managed to do. One reporter notes: "hi @lecaros I think this was only [warn] message, I checked with es, I can search the right apps logs", meaning the retries eventually succeeded and nothing was lost. Others do see log loss, for example with fluent-bit 1.6.10, or with td-agent-bit shipping systemd logs to Elasticsearch, where the only log entry that shows up is:

[engine] failed to flush chunk '3743-1581410162.822679017.flb', retry in 617 seconds: task_id=56, input=systemd.1 > output=es.0

The same symptom appears in quite different pipelines: Fluentd running in a Kubernetes cluster to collect pod logs and send them to Elasticsearch, EKS logs shipped to Graylog, and application logs from an ECS Fargate cluster sent to Elastic Cloud, where everything works under 200 tps and the flush failures start above that rate. A useful cross-check is the output's own metric for the number of log records that the output instance has successfully sent: if it keeps increasing, records are still being delivered while the warnings appear.
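When the warning is the only line that shows up, raising the log level in the [SERVICE] section exposes the output's HTTP exchange; a sketch, assuming the rest of the configuration stays unchanged:

[SERVICE]
    Flush     5
    Daemon    off
    # debug makes the es output print the HTTP Status=... URI=/_bulk lines
    # and the bulk responses that explain why a chunk cannot be flushed
    Log_Level debug

Running with debug logging is what produced the mapper_parsing_exception details shown above; drop it back to info once the cause is found.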
Most of the Kubernetes reports change very little of the configuration. The es output is part of a subchart, so only the output section is overridden and the chart redeployed with the updated values.yaml file, typically with the Elasticsearch service name templated in as Host {{ .Release.Name }}-elasticsearch-master and in some cases the username and password hard-coded in the same section (see also sassoftware/viya4-monitoring-kubernetes#431). The engine warning is not tied to the es output either; it also appears when forwarding metrics to another Fluent Bit instance:

[SERVICE]
    Flush  5
    Daemon off

[INPUT]
    Name cpu
    Tag  fluent_bit

[OUTPUT]
    Name  forward
    Match *
    Host  fd00:7fff:0:2:9c43:9bff:fe00:bb
    Port  24000

If the destination itself is wrong, the debug log makes that explicit: instead of a 200 with per-item errors you get an outright HTTP error, as in this case where /_bulk does not exist on the configured host:

[2022/03/18 11:23:17] [ warn] [engine] failed to flush chunk '1-1647602596.725620402.flb', retry in 9 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
[2022/03/18 11:23:17] [error] [output:es:es.0] HTTP status=404 URI=/_bulk, response: {"error":"404 page ...

so the Host, Port or path in the es output needs to point at a real Elasticsearch bulk endpoint.
The wording comes from how chunk-based buffering works. Fluentd collects log data into a blob called a chunk. A newly created chunk is in the stage, where it is filled with records; when it is full, it moves to the queue, where chunks are held until they are flushed, that is, written out to their destination. Fluent Bit's engine uses the same model, which is why its retries are tracked per chunk. A flush can fail for a number of reasons, such as network issues or the destination rejecting the data, and with time-keyed buffers Fluentd additionally waits for timekey_wait before flushing so that delayed events can still be added to the right chunk; with timekey 3600, for example, each hourly chunk is actually flushed timekey_wait after the hour closes. While failed chunks pile up, the input side can be throttled as well, which shows up as:

[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
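A minimal sketch of a time-keyed Fluentd buffer to make the stage/queue behaviour concrete; the elasticsearch output, host name and wait value are illustrative assumptions, not taken from the original configuration:

<match kube.**>
  @type elasticsearch
  # illustrative destination
  host elasticsearch-master
  port 9200
  logstash_format true
  <buffer time>
    # one chunk per hour of event time
    timekey 3600
    # hold each hourly chunk a further 10 minutes so late events still land in it
    timekey_wait 10m
    # give up on a chunk after a limited number of failed flushes instead of retrying forever
    retry_max_times 5
  </buffer>
</match>

With timekey 3600 and timekey_wait 10m, the chunk covering 13:00 to 14:00 is actually flushed at around 14:10, which is exactly the kind of delay described above.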
The retry delay grows with each failed attempt, so the same chunk can show up with retry in 9, 30, 130 or 786 seconds, and the problem is not specific to the es output. With a tcp output a chunk can be parked for a very long time and then never be mentioned again:

[engine] failed to flush chunk '1-1612396545.855856569.flb', retry in 1485 seconds: task_id=143, input=forward.0 > output=tcp.0

The wording also has close relatives elsewhere: Loki logs caller=flush.go:198 org_id=fake msg="failed to flush user" err=timeout, and the websocket output triggers a retry, with another handshake and data flush, whenever the websocket server flaps within a short time. A fuller Fluent Bit pipeline in which the warning was observed (forward input plus cpu, disk, memory and network interface metrics) looks like this:

[SERVICE]
    Flush        1
    Daemon       off
    Log_level    info
    Parsers_File parsers.conf
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[INPUT]
    name cpu
    tag  metrics_cpu

[INPUT]
    name disk
    tag  metrics_disk

[INPUT]
    name mem
    tag  metrics_memory

[INPUT]
    name      netif
    tag       metrics_netif
    interface eth0

[FILTER]
    Name parser
    ...

Finally, a related warning appears when the response from Elasticsearch is larger than the output's read buffer:

[2022/03/24 04:20:36] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000

in which case the bulk response, and with it the real error, is cut off before Fluent Bit can report it.
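If that truncation is the problem, the es output's Buffer_Size parameter controls how much of the Elasticsearch reply Fluent Bit reads back; a sketch, assuming the rest of the output stays as configured:

[OUTPUT]
    Name        es
    Match       kube.*
    Host        {{ .Release.Name }}-elasticsearch-master
    Port        9200
    # read the whole bulk response instead of capping it at the 512000-byte limit
    # shown in the warning above, so the real error reason is not cut off
    Buffer_Size False

This only changes how much of the error Fluent Bit can report; the chunk itself keeps failing until the underlying mapping conflict or endpoint problem is fixed.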
