HTTP 502 status after scale-up

Hi, I am using Traefik with the ECS provider as the target. When a new target instance starts and Traefik learns about it through the periodic ECS polling, it starts sending requests to it instantly, before it is ready, causing 502 errors for a short time, about 39 seconds. I have read about similar issues where the solution was some Docker Swarm property, which is not relevant for ECS. I have tried to configure the retry middleware, but the initialInterval field was not recognized, presumably a version mismatch. Without it the retry happens, but the errors still come. Is the retry intelligent enough to resend to a different node? Should I configure a health check between Traefik and its targets?

The issue also comes during downscale, but that is a different case; I guess it happens because the ECS polling has not yet noticed the stopping node. Right now this is a showstopper for me, any help is appreciated.
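For reference, this is roughly how I wired up the retry middleware through the dockerLabels of the task definition (shown as a YAML map for readability; the myapp name and the values are placeholders for my setup):

```yaml
# dockerLabels on the container definition
traefik.enable: "true"
traefik.http.routers.myapp.rule: "Host(`myapp.example.com`)"
traefik.http.routers.myapp.middlewares: "myapp-retry"
traefik.http.middlewares.myapp-retry.retry.attempts: "3"
# this is the field that was not recognized on my install,
# apparently it only exists in newer releases:
# traefik.http.middlewares.myapp-retry.retry.initialinterval: "100ms"
```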

Configuring a health check on the target task definitions solved the upscale 502 problem. However, 502s still happen during downscale. I understand why: the polling interval is 15 seconds (and I also use 15 seconds as the health check period), so Traefik keeps forwarding requests to a stopped node until the next ECS poll, and only then removes it from the targets. How could I work around this? Does retry resend to the other available target task? I have configured retry and the issue still comes, so maybe I configured it wrong; the retry interval seems impossible to configure right now based on the documentation or anything else I could find on the net. Another option could be to decrease the health check interval to something very short, e.g. 3 seconds; then the retry would hit a good target, and a 3-second delay is acceptable. Any suggestions?
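For reference, the short Traefik-side health check I have in mind can be set per service through the same dockerLabels (again shown as a YAML map; the /health path, the 3s interval and the myapp name are placeholders):

```yaml
# dockerLabels on the container definition
traefik.http.services.myapp.loadbalancer.healthcheck.path: "/health"
traefik.http.services.myapp.loadbalancer.healthcheck.interval: "3s"
traefik.http.services.myapp.loadbalancer.healthcheck.timeout: "1s"
```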

It seems that there is no solution; the label-based routing in Traefik is not mature enough for real use. Because the ECS integration is based on polling, if a target node stops, Traefik keeps routing to that node until the next poll. Since scaling up and down is an everyday activity driven by the load on the ECS cluster, this means lost requests every day, which is not acceptable. So now I am moving to the file provider for dynamic configuration.
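For anyone landing here, this is roughly the shape of the dynamic config file I am moving to (the host name, server IPs, ports and the /health path are placeholders for my setup); the file provider watches it, so edits are picked up without a restart:

```yaml
# dynamic-conf.yml, loaded by the file provider
# (enable it in the static config with providers.file.filename
#  and providers.file.watch=true)
http:
  routers:
    myapp:
      rule: "Host(`myapp.example.com`)"
      service: myapp
  services:
    myapp:
      loadBalancer:
        # Traefik-side health check so stopped servers are dropped quickly
        healthCheck:
          path: "/health"
          interval: "3s"
          timeout: "1s"
        servers:
          - url: "http://10.0.1.10:8080"
          - url: "http://10.0.1.11:8080"
```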