Debugging the Istio rate limiting handler



I'm trying to apply rate limiting to some of our internal services (inside the mesh).

I used the example from the documentation and generated a Redis rate-limiting configuration that includes a (Redis) handler, a quota instance, a quota spec, a quota spec binding, and a rule to apply the handler.

This is the Redis handler:

apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: redishandler
  namespace: istio-system
spec:
  compiledAdapter: redisquota
  params:
    redisServerUrl: <REDIS>:6379
    connectionPoolSize: 10
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 10
      validDuration: 100s
      rateLimitAlgorithm: FIXED_WINDOW
      overrides:
      - dimensions:
          destination: s1
        maxAmount: 1
      - dimensions:
          destination: s3
        maxAmount: 1
      - dimensions:
          destination: s2
        maxAmount: 1

The quota instance (at the moment I'm only interested in limiting by destination):

apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | destination.service.host | "unknown"

The quota spec, charging 1 per request if I understand correctly:

apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota

The quota spec binding that all participating services pre-fetch. I also tried service: "*", which did nothing either.

apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - name: s2
    namespace: default
  - name: s3
    namespace: default
  - name: s1
    namespace: default
    # - service: '*'  # Uncomment this to bind *all* services to request-count

The rule applying the handler. Currently it applies on all occasions (I also tried adding a match, sketched after the rule below, but that didn't change anything either):

apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: redishandler
    instances:
    - requestcountquota
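
The variant with a match looked roughly like this (a sketch; the exact match expression is illustrative, limiting the rule to a single destination):

apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  # only evaluate the quota action for one destination
  match: destination.labels["app"] == "s1"
  actions:
  - handler: redishandler
    instances:
    - requestcountquota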

The VirtualService definitions for all participants are pretty similar:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: s1
spec:
  hosts:
  - s1

  http:
  - route:
    - destination:
        host: s1

The problem is that nothing really happens and no rate limiting takes place. I tested with curl from pods inside the mesh. The Redis instance is empty (no keys on db 0, which I assume is what the rate limiting would use), so I know it can hardly be rate-limiting anything.
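
For anyone reproducing, the check can be done along these lines (a sketch; the pod and container names are placeholders, and db 0 being the database the adapter uses is my assumption):

# send more requests than the configured limit from a pod inside the mesh
kubectl exec -it <some-pod> -c <app-container> -- sh -c \
  'for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" http://s1.default.svc.cluster.local; done'

# check whether the adapter wrote any keys to Redis (db 0 is my assumption)
kubectl exec -it <redis-pod> -- redis-cli -n 0 keys '*'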

The handler seems to be configured properly (how can I make sure?), because some errors in it were reported in the mixer (policy). There are still some errors, but none that I associate with this problem or with the configuration. The only line in which the Redis handler is mentioned is this:

2019-12-17T13:44:22.958041Z info    adapters    adapter closed all scheduled daemons and workers    {"adapter": "redishandler.istio-system"}   

but it's unclear whether this is a problem. I assume it isn't.
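
For context, the mixer (policy) logs can be pulled with something like the following (a sketch; the label selector is an assumption based on the default istio-policy deployment):

# tail the mixer container of the policy pod(s) in istio-system
kubectl -n istio-system logs -l app=policy -c mixer --tail=100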

These are the rest of the lines from the reload once I deploy:

2019-12-17T13:44:22.601644Z info    Built new config.Snapshot: id='43'
2019-12-17T13:44:22.601866Z info    adapters    getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.601881Z warn    Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2019-12-17T13:44:22.602718Z info    adapters    Waiting for kubernetes cache sync...    {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903844Z info    adapters    Cache sync successful.  {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903878Z info    adapters    getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.903882Z warn    Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2019-12-17T13:44:22.904808Z info    Setting up event handlers
2019-12-17T13:44:22.904939Z info    Starting Secrets controller
2019-12-17T13:44:22.904991Z info    Waiting for informer caches to sync
2019-12-17T13:44:22.957893Z info    Cleaning up handler table, with config ID:42
2019-12-17T13:44:22.957924Z info    adapters    deleted remote controller   {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.957999Z info    adapters    adapter closed all scheduled daemons and workers    {"adapter": "prometheus.istio-system"}
2019-12-17T13:44:22.958041Z info    adapters    adapter closed all scheduled daemons and workers    {"adapter": "redishandler.istio-system"}   
2019-12-17T13:44:22.958065Z info    adapters    shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958050Z info    adapters    shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958096Z info    adapters    shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:22.958182Z info    adapters    shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:44:23.958109Z info    adapters    adapter closed all scheduled daemons and workers    {"adapter": "kubernetesenv.istio-system"}
2019-12-17T13:55:21.042131Z info    transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-17T14:14:00.265722Z info    transport: loopyWriter.run returning. connection error: desc = "transport is closing"

I'm using the demo configuration profile with disablePolicyChecks: false to enable rate limiting. This is on Istio 1.4.0, deployed on EKS.
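
For reference, that profile/flag combination would be applied roughly like this (a sketch; I'm assuming an istioctl manifest apply based install, the exact command may differ in your setup):

istioctl manifest apply \
  --set profile=demo \
  --set values.global.disablePolicyChecks=false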

I also tried memquota with low limits (this is our staging environment) and nothing seems to work. I never got a 429, no matter how much I exceeded the configured rate limit.
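
The memquota attempt followed the docs' example, something along these lines (a sketch; the handler name and limits are illustrative):

apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: quotahandler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 1
      validDuration: 100s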

I don't know how to debug this and see where the configuration is wrong, causing it to do nothing.
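
In case it helps, one check worth doing is whether the sidecar is configured to call the policy (mixer) check at all (a sketch; it relies on pilot-agent proxying the Envoy admin API inside the istio-proxy container, and the pod name is a placeholder):

# dump the sidecar's Envoy config for one of the services and look for the mixer policy filter
kubectl exec <s1-pod> -c istio-proxy -- pilot-agent request GET config_dump > s1_config_dump.json
grep -ic mixer s1_config_dump.json   # zero matches would suggest policy checks are not wired in at all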

Any help is appreciated.


+1, I also can't get anything working with 1.4.2 and memquota on a clean vanilla kubeadm cluster. I've spent a significant amount of time debugging to no avail. I'd love to see some answers here as well. I've started a bounty.
gertvdijk

I put up the largest bounty I could. It expired.
Reut Sharabani

Answers:



I also spent hours trying to decipher the documentation and get the sample working.

According to the docs, they recommend enabling policy checks:

https://istio.io/docs/tasks/policy-enforcement/rate-limiting/

However, when that did not work, I did an "istioctl profile dump", searched for policy, and tried several settings.

I installed with Helm, passed the following, and was then able to get the described behavior:

--set global.disablePolicyChecks=false \
--set values.pilot.policy.enabled=true \

===> This made it work, but it's not in the docs.
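
If installing with istioctl instead of Helm, the equivalent overrides would presumably be (a sketch; the value paths are taken from the flags above and may differ between versions):

istioctl manifest apply \
  --set values.global.disablePolicyChecks=false \
  --set values.pilot.policy.enabled=true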


Thanks! This is so old that we've since abandoned Istio (partly because of this). I'll award you the bounty since it points to some clue as to why it doesn't work: github.com/istio/istio/issues/19139
Reut Sharabani