程序印象

Developing Controllers with client-go

2018/03/28


Introduction to client-go

GitHub: https://github.com/kubernetes/client-go/

client-go is the official Go client library for Kubernetes. Kubernetes itself uses it for internal communication, and many third-party projects built on Kubernetes use it as well, for example etcd-operator and prometheus-operator.

The current client-go version is 6.0, which supports Kubernetes up to 1.9. The changes in version 6.0 are described in the official blog post Introducing client-go version 6; version management follows Semantic Versioning 2.0.0.

The main packages in the library:

  • The kubernetes package contains the clientset to access Kubernetes API.
  • The discovery package is used to discover APIs supported by a Kubernetes API server.
  • The dynamic package contains a dynamic client that can perform generic operations on arbitrary Kubernetes API objects.
  • The transport package is used to set up auth and start a connection.
  • The tools/cache package is useful for writing controllers.

Compatibility matrix

               Kubernetes 1.4  Kubernetes 1.5  Kubernetes 1.6  Kubernetes 1.7  Kubernetes 1.8  Kubernetes 1.9
client-go 1.4  ✓               -               -               -               -               -
client-go 1.5  +               -               -               -               -               -
client-go 2.0  +-              ✓               +-              +-              +-              +-
client-go 3.0  +-              +-              ✓               -               +-              +-
client-go 4.0  +-              +-              +-              ✓               +-              +-
client-go 5.0  +-              +-              +-              +-              ✓               +-
client-go 6.0  +-              +-              +-              +-              +-              ✓
client-go HEAD +-              +-              +-              +-              +-              +

Key:

  • ✓ Exactly the same features / API objects in both client-go and the Kubernetes version.
  • + client-go has features or API objects that may not be present in the Kubernetes cluster, either due to that client-go has additional new API, or that the server has removed old API. However, everything they have in common (i.e., most APIs) will work. Please note that alpha APIs may vanish or change significantly in a single release.
  • - The Kubernetes cluster has features the client-go library can’t use, either due to the server has additional new API, or that client-go has removed old API. However, everything they share in common (i.e., most APIs) will work.
client-go 3.0  Kubernetes main repo, 1.6 branch     =
client-go 4.0  Kubernetes main repo, 1.7 branch     =
client-go 5.0  Kubernetes main repo, 1.8 branch     ✓
client-go 6.0  Kubernetes main repo, 1.9 branch     ✓
client-go HEAD Kubernetes main repo, master branch  ✓

Key:

  • ✓ Changes in main Kubernetes repo are actively published to client-go by a bot
  • = Maintenance is manual, only severe security bugs will be patched.
  • - Deprecated; please upgrade.

Managing dependencies with glide is recommended:

Start with a simple sample program, such as the official example that accesses in-cluster Pods from outside the cluster:

$ glide init
# complete the interactive prompts

The generated glide.yaml looks like this:

package: github.com/DavadDi/k8s-client-go/out-of-cluster-client-configuration
import:
- package: k8s.io/apimachinery
  subpackages:
  - pkg/api/errors
  - pkg/apis/meta/v1
- package: k8s.io/client-go
  version: ^6.0.0
  subpackages:
  - kubernetes
  - tools/clientcmd

See INSTALL for more installation options.

If you run inside the cluster as a Pod, see the in-cluster-client-configuration example; if you run outside the cluster, see out-of-cluster-client-configuration.

Accessing k8s Cluster Resources

Sample code:

package main

import (
	"encoding/json"
	"flag"
	"log"
	"os"
	"path/filepath"
	"time"

	"k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

var kubeconfig *string

func main() {
	log.Println(*kubeconfig)
	clientSet, err := creatClient("", *kubeconfig)
	if err != nil {
		log.Println("Create Kubernetes client failed", err)
		return
	}

	// Do the List and Get work
	for {
		// pods, err := clientSet.CoreV1().Pods(v1.NamespaceDefault).List(metav1.ListOptions{})
		// curl -v http://127.0.0.1:8001/api/v1/namespaces/soa/pods/authhttp-57d998968d-hn5zb
		pods, err := clientSet.CoreV1().Pods(v1.NamespaceAll).List(metav1.ListOptions{})
		if err != nil {
			log.Printf("ListPod failed for %s, error: %s\n", v1.NamespaceAll, err)
			return
		}

		printPods(pods.Items)

		pod, err := clientSet.CoreV1().Pods(v1.NamespaceDefault).Get("hello-4h7wt", metav1.GetOptions{})
		if err != nil {
			handleError(err)
		} else {
			data, _ := json.MarshalIndent(pod, "", " ")
			log.Printf("Found pod %s\n", string(data))
		}

		time.Sleep(10 * time.Second)
	}
}

// ....

TODO: complete code sample
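To make the snippet above self-contained, the elided pieces (the kubeconfig flag and the creatClient, printPods, and handleError helpers) can be sketched roughly as follows. This is an illustrative sketch, not the author's original code; the helper names are taken from the call sites above:

```go
func init() {
	// Default the kubeconfig flag to ~/.kube/config, as in the official
	// out-of-cluster examples.
	kubeconfig = flag.String("kubeconfig",
		filepath.Join(os.Getenv("HOME"), ".kube", "config"),
		"absolute path to the kubeconfig file")
	flag.Parse()
}

// creatClient builds a Clientset from a master URL and/or a kubeconfig path.
func creatClient(master, kubeconfig string) (*kubernetes.Clientset, error) {
	config, err := clientcmd.BuildConfigFromFlags(master, kubeconfig)
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}

// printPods logs one line per pod.
func printPods(pods []v1.Pod) {
	for _, pod := range pods {
		log.Printf("%s/%s phase=%s\n", pod.Namespace, pod.Name, pod.Status.Phase)
	}
}

// handleError distinguishes "not found" and other API status errors.
func handleError(err error) {
	if errors.IsNotFound(err) {
		log.Println("pod not found")
	} else if statusErr, ok := err.(*errors.StatusError); ok {
		log.Printf("error getting pod: %v\n", statusErr.ErrStatus.Message)
	} else {
		log.Println("unexpected error:", err)
	}
}
```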

Listing/Watching k8s Cluster Resources

For the use of ListWatchers for Pods, Endpoints, Services, and Namespaces in Kubernetes, see the implementation details of the Kubernetes DNS components: kube-dns pkg/dns/dns.go and the CoreDNS kubernetes plugin.

package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"time"

	"github.com/golang/glog"

	"k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/workqueue"
)

type Controller struct {
	indexer  cache.Indexer
	queue    workqueue.RateLimitingInterface
	informer cache.Controller
}

type QueueItem struct {
	Key    string
	Type   cache.DeltaType
	Object interface{}
}

func NewController(queue workqueue.RateLimitingInterface, indexer cache.Indexer, informer cache.Controller) *Controller {
	return &Controller{
		informer: informer,
		indexer:  indexer,
		queue:    queue,
	}
}

func (c *Controller) processNextItem() bool {
	// Wait until there is a new item in the working queue
	item, quit := c.queue.Get()
	if quit {
		return false
	}
	// Tell the queue that we are done with processing this key. This unblocks the key for other workers.
	// This allows safe parallel processing because two endpoints with the same key are never processed in
	// parallel.
	defer c.queue.Done(item)

	// Invoke the method containing the business logic
	err := c.syncToStdout(item.(QueueItem))
	// Handle the error if something went wrong during the execution of the business logic
	c.handleErr(err, item)
	return true
}

// syncToStdout is the business logic of the controller. In this controller it simply prints
// information about the endpoints to stdout. In case an error happened, it has to simply return the error.
// The retry logic should not be part of the business logic.
func (c *Controller) syncToStdout(item QueueItem) error {
	obj, exists, err := c.indexer.GetByKey(item.Key)
	if err != nil {
		glog.Errorf("Fetching object with key %s from store failed with %v", item.Key, err)
		return err
	}

	if !exists {
		// Below we will warm up our cache with an Endpoints object, so that we will see a delete for it
		endpoints, ok := item.Object.(*v1.Endpoints)
		if !ok {
			fmt.Printf("Key %s [%s]\n", item.Key, item.Type)
		} else {
			data, _ := json.MarshalIndent(item.Object, "", " ")
			fmt.Printf("Endpoints %s [%s] key %s obj [%s]\n", endpoints.GetName(), item.Type, item.Key, string(data))
		}
	} else {
		// Note that you also have to check the uid if you have a local controlled resource, which
		// is dependent on the actual instance, to detect that an Endpoints object was recreated with the same name
		endpoints := obj.(*v1.Endpoints)
		fmt.Printf("Endpoints %s [%s]\n",
			endpoints.GetName(), item.Type)
	}
	return nil
}

// handleErr checks if an error happened and makes sure we will retry later.
func (c *Controller) handleErr(err error, key interface{}) {
	if err == nil {
		// Forget about the #AddRateLimited history of the key on every successful synchronization.
		// This ensures that future processing of updates for this key is not delayed because of
		// an outdated error history.
		c.queue.Forget(key)
		return
	}

	// This controller retries 5 times if something goes wrong. After that, it stops trying.
	if c.queue.NumRequeues(key) < 5 {
		glog.Infof("Error syncing endpoints %v: %v", key, err)

		// Re-enqueue the key rate limited. Based on the rate limiter on the
		// queue and the re-enqueue history, the key will be processed later again.
		c.queue.AddRateLimited(key)
		return
	}

	c.queue.Forget(key)
	// Report to an external entity that, even after several retries, we could not successfully process this key
	runtime.HandleError(err)
	glog.Infof("Dropping endpoints %q out of the queue: %v", key, err)
}

func (c *Controller) Run(threadiness int, stopCh chan struct{}) {
	defer runtime.HandleCrash()

	// Let the workers stop when we are done
	defer c.queue.ShutDown()
	glog.Info("Starting endpoints controller")

	go c.informer.Run(stopCh)

	// Wait for all involved caches to be synced, before processing items from the queue is started
	if !cache.WaitForCacheSync(stopCh, c.informer.HasSynced) {
		runtime.HandleError(fmt.Errorf("Timed out waiting for caches to sync"))
		return
	}

	for i := 0; i < threadiness; i++ {
		// Re-run c.runWorker one second after it returns, until stopCh is closed
		go wait.Until(c.runWorker, time.Second, stopCh)
	}

	<-stopCh
	glog.Info("Stopping endpoints controller")
}

func (c *Controller) runWorker() {
	for c.processNextItem() {
	}
}

func main() {
	var kubeconfig string
	var master string

	flag.StringVar(&kubeconfig, "kubeconfig", "", "absolute path to the kubeconfig file")
	flag.StringVar(&master, "master", "", "master url")
	flag.Parse()

	// creates the connection
	config, err := clientcmd.BuildConfigFromFlags(master, kubeconfig)
	if err != nil {
		glog.Fatal(err)
	}

	// creates the clientset
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		glog.Fatal(err)
	}

	// create the endpoints watcher
	endpointsListWatcher := cache.NewListWatchFromClient(clientset.CoreV1().RESTClient(), "endpoints", "soa", fields.Everything())

	// create the workqueue
	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())

	// Bind the workqueue to a cache with the help of an informer. This way we make sure that
	// whenever the cache is updated, the endpoints key is added to the workqueue.
	// Note that when we finally process the item from the workqueue, we might see a newer version
	// of the endpoints than the version which was responsible for triggering the update.
	indexer, informer := cache.NewIndexerInformer(endpointsListWatcher, &v1.Endpoints{}, 0, cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			key, err := cache.MetaNamespaceKeyFunc(obj)
			if err == nil {
				queue.Add(QueueItem{key, cache.Added, nil})
			}
		},
		UpdateFunc: func(old interface{}, new interface{}) {
			key, err := cache.MetaNamespaceKeyFunc(new)
			if err == nil {
				queue.Add(QueueItem{key, cache.Updated, nil})
			}

			e1 := old.(*v1.Endpoints)
			e2 := new.(*v1.Endpoints)
			e1Data, _ := json.MarshalIndent(e1, "", " ")
			e2Data, _ := json.MarshalIndent(e2, "", " ")
			fmt.Println(string(e1Data), string(e2Data))
		},
		DeleteFunc: func(obj interface{}) {
			// IndexerInformer uses a delta queue, therefore for deletes we have to use this
			// key function.
			// If the delete was observed in an unexpected way, obj may be a
			// cache.DeletedFinalStateUnknown wrapper rather than the object itself,
			// so handle it with special care.
			key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
			if err == nil {
				queue.Add(QueueItem{key, cache.Deleted, obj})
			}
		},
	}, cache.Indexers{})

	controller := NewController(queue, indexer, informer)

	// Now let's start the controller
	stop := make(chan struct{})
	defer close(stop)
	go controller.Run(1, stop)

	// Wait forever
	select {}
}
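The keys that flow through the workqueue above come from cache.MetaNamespaceKeyFunc and are plain namespace/name strings. A dependency-free helper mirroring cache.SplitMetaNamespaceKey makes the format explicit (written here for illustration; real code should use the cache package's function):

```go
package main

import (
	"fmt"
	"strings"
)

// splitKey mirrors the behaviour of cache.SplitMetaNamespaceKey:
// "default/my-endpoints" -> ("default", "my-endpoints"),
// "my-endpoints" -> ("", "my-endpoints") for cluster-scoped objects.
func splitKey(key string) (namespace, name string, err error) {
	parts := strings.Split(key, "/")
	switch len(parts) {
	case 1:
		return "", parts[0], nil // name only, no namespace
	case 2:
		return parts[0], parts[1], nil // namespace and name
	}
	return "", "", fmt.Errorf("unexpected key format: %q", key)
}

func main() {
	ns, name, _ := splitKey("soa/authhttp")
	fmt.Println(ns, name) // soa authhttp
}
```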

Sample Controller CRD

The sample controller lives on GitHub at https://github.com/kubernetes/sample-controller. The k8s community has also settled on a set of patterns for writing controllers; see https://github.com/kubernetes/community/blob/8cafef897a22026d42f5e5bb3f104febe7e29830/contributors/devel/controllers.md

Overall framework diagram of the program:

sample controller running:

# assumes you have a working kubeconfig, not required if operating in-cluster
$ go run *.go -kubeconfig=$HOME/.kube/config -logtostderr=true
# The program may first report an error like:
#   k8s.io/sample-controller/pkg/client/informers/externalversions/factory.go:73:
#   Failed to list *v1alpha1.Foo: v1.ListOptions is not suitable for converting to "samplecontroller.k8s.io/v1alpha1"
# This happens because the custom CRD Foo has not been created yet; the program
# works normally once the create commands below have been run.

# create a CustomResourceDefinition
$ kubectl create -f artifacts/examples/crd.yaml
# kubectl get crd
# kubectl get Foo.samplecontroller.k8s.io

# create a custom resource of type Foo
$ kubectl create -f artifacts/examples/example-foo.yaml

# check deployments created through the custom resource
$ kubectl get deployments

  1. [RBAC] got a message: User “” cannot list pods at the cluster scope.

In addition, different branches of sample-controller may register a different apiVersion in crd.yaml, and when that apiVersion does not match the version the code accesses, access also fails. For example, the sample-controller 1.9 branch uses apiextensions.k8s.io/v1beta1, and the code accesses c.kubeclientset.AppsV1().Deployments(foo.Namespace).Create(newDeployment(foo)); in 1.10 the apiVersion changed to apiextensions.k8s.io/v1, so take particular care here.

The Kubernetes API Access Model

The REST API is the cornerstone of Kubernetes; all operations and communication are handled as API calls through the API Server.

API versioning

Kubernetes supports multiple API versions, each at a different path, for example /api/v1 or /apis/extensions/v1beta1.

API versions evolve in stages: Alpha (v1alpha1) -> Beta (v2beta3) -> Stable (vX)

API groups

The main goal of API groups is to break the single monolithic v1 API into fine-grained groups that can be enabled or disabled individually, support different versions per group, and evolve independently; see api-group.md for details.

The API groups currently in use are:

  1. The core (legacy) group, at REST path /api/v1, with apiVersion: v1.

  2. Named groups are at REST path /apis/$GROUP_NAME/$VERSION, and use apiVersion: $GROUP_NAME/$VERSION (for example, apiVersion: batch/v1).

    • Listing the groups (e.g. via kubectl proxy):

      $ curl http://127.0.0.1:8001/apis   # shows the APIGroupList
      {
        "kind": "APIGroupList",
        "apiVersion": "v1",
        "groups": [xxxx]
      }
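The two path conventions can be made concrete with a small stand-alone helper (purely illustrative; the real path construction lives inside client-go's REST client):

```go
package main

import (
	"fmt"
	"path"
)

// apiPath builds the REST path for a resource following the conventions
// above: the core group lives under /api/v1, named groups under
// /apis/$GROUP_NAME/$VERSION.
func apiPath(group, version, namespace, resource string) string {
	var p string
	if group == "" {
		p = path.Join("/api", version) // core (legacy) group
	} else {
		p = path.Join("/apis", group, version) // named group
	}
	if namespace != "" {
		p = path.Join(p, "namespaces", namespace)
	}
	return path.Join(p, resource)
}

func main() {
	fmt.Println(apiPath("", "v1", "default", "pods")) // /api/v1/namespaces/default/pods
	fmt.Println(apiPath("batch", "v1", "", "jobs"))   // /apis/batch/v1/jobs
}
```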

There are currently two supported ways to extend the API:

  1. Custom resources (CustomResourceDefinition), mainly for simple CRUD; in early versions this was called ThirdPartyResources. It was introduced as Alpha in 1.7 and became Beta in 1.8. A custom resource must implement the runtime.Object interface; k8s.io/code-generator can generate the related code — see the openshift sample repository openshift-evangelists/crd-code-generation.

  2. API Server extension (in progress), which dispatches requests through an aggregator and is transparent to clients.

Controlling Access to the Kubernetes API

On its way in, an API request passes through several checking stages; there are currently three main stages, as shown in the figure below:

Authentication

See Authenticating for the various authentication types in detail. This stage checks the validity of the credentials the user presents: username/password, certificates, plain tokens, bootstrap tokens, and JWT tokens (service accounts). If authentication fails, 401 is returned; on success the requester is authenticated as a specific username, which is used in the following steps. Kubernetes uses the username only for access control decisions; it keeps no internal user object and stores no information about the username.

Authorization

At this stage a request must include the requester's username, the requested action, and the resource object affected by the action. The request is authorized if an existing policy grants the user permission to complete the action.

If Bob has the following policy:

{
  "apiVersion": "abac.authorization.kubernetes.io/v1beta1",
  "kind": "Policy",
  "spec": {
    "user": "bob",
    "namespace": "projectCaribou",
    "resource": "pods",
    "readonly": true
  }
}

then the following request would be authorized:

{
  "apiVersion": "authorization.k8s.io/v1beta1",
  "kind": "SubjectAccessReview",
  "spec": {
    "resourceAttributes": {
      "namespace": "projectCaribou",
      "verb": "get",
      "group": "unicorn.example.org",
      "resource": "pods"
    }
  }
}

Kubernetes supports several authorization modules, such as Node, ABAC, RBAC, and Webhook. The modules to use are specified when the API Server starts, with the --authorization-mode= flag. Kubernetes checks the configured modules in turn: if any one of them authorizes the request, it is allowed; if none does, the request is rejected with 403 Forbidden. See the Access Control Overview for a more detailed introduction.
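For example, a cluster that combines Node and RBAC authorization would start the API Server with (flag values are illustrative):

```
kube-apiserver --authorization-mode=Node,RBAC ...
```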

Kubernetes reviews only the following API request attributes:

  • user - The user string provided during authentication.
  • group - The list of group names to which the authenticated user belongs.
  • “extra” - A map of arbitrary string keys to string values, provided by the authentication layer.
  • API - Indicates whether the request is for an API resource.
  • Request path - Path to miscellaneous non-resource endpoints like /api or /healthz.
  • API request verb - API verbs get, list, create, update, patch, watch, proxy, redirect, delete, and deletecollection are used for resource requests. To determine the request verb for a resource API endpoint, see Determine the request verb below.
  • HTTP request verb - HTTP verbs get, post, put, and delete are used for non-resource requests.
  • Resource - The ID or name of the resource that is being accessed (for resource requests only) – For resource requests using get, update, patch, and delete verbs, you must provide the resource name.
  • Subresource - The subresource that is being accessed (for resource requests only).
  • Namespace - The namespace of the object that is being accessed (for namespaced resource requests only).
  • API group - The API group being accessed (for resource requests only). An empty string designates the core API group.
HTTP verb   request verb
---------   ------------
POST        create
GET, HEAD   get (for individual resources), list (for collections)
PUT         update
PATCH       patch
DELETE      delete (for individual resources), deletecollection (for collections)

kubectl provides the auth can-i subcommand for quick checks; it uses the SelfSubjectAccessReview API to determine whether the current user may perform a given action. For example:

$ kubectl auth can-i create deployments --namespace dev
yes
$ kubectl auth can-i create deployments --namespace prod
no

You can also do this by creating Kubernetes resources directly:

$ kubectl create -f - -o yaml << EOF
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
spec:
  resourceAttributes:
    group: apps
    name: deployments
    verb: create
    namespace: dev
EOF

apiVersion: authorization.k8s.io/v1
kind: SelfSubjectAccessReview
metadata:
  creationTimestamp: null
spec:
  resourceAttributes:
    group: apps
    name: deployments
    namespace: dev
    verb: create
status:
  allowed: true
  denied: false

Admission Control

Admission Control modules can modify or reject requests. Besides the attributes available to the Authorization modules, Admission Control modules can also access the contents of the object being created or updated. They act on objects being created, deleted, updated, or connected (proxy), but not on reads. Multiple admission controllers can be configured, and they are called in order.

Unlike the Authentication and Authorization modules, if any Admission Control module rejects the request, it is rejected immediately. Besides rejecting objects, Admission Control can also set complex default values for fields; see the Using Admission Controllers chapter for details. Once a request passes all admission controllers, it is checked by the validation routines of the corresponding API object and then written to the object store (step 4 in the figure).

$ kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger ...

The recommended plugin list for 1.9 and later:

--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota

Admission control runs in two phases: in the first, the mutating admission controllers run; in the second, the validating admission controllers run. Some controllers participate in both phases.

The early admission model had the following limitations:

  1. Controllers had to be compiled into kube-apiserver.

  2. They could only be configured when the apiserver started.

To make admission control more flexible, Kubernetes 1.7 introduced the Initializers and External Admission Webhooks features, which allow admission controllers to be developed out of tree and configured at runtime; see Dynamic Admission Control. In 1.9, Admission Webhooks are beta and Initializers are alpha.

admission webhooks: HTTP callbacks that receive admission requests and act on their contents. There are two kinds: ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks. For a sample, see caesarxuchao/example-webhook-admission-controller, which shows how an admission controller can restrict users to pulling images only from a specific Docker registry.

CRD

The client library for creating and managing CRDs lives at: https://github.com/kubernetes/apiextensions-apiserver

API server for API extensions like CustomResourceDefinitions

It provides an API for registering CustomResourceDefinitions

At first I was convinced that the CRD registration functions ought to live in the client-go library. The official controller sample only demonstrates generating CRD client (read) code with k8s.io/code-generator; the resources in the sample are all created from yaml files via kubectl, with these main steps:

# assumes you have a working kubeconfig, not required if operating in-cluster
# run the program, which reads the CRD objects
$ go run *.go -kubeconfig=$HOME/.kube/config

# create the CRD manually via kubectl
# create a CustomResourceDefinition
$ kubectl create -f artifacts/examples/crd.yaml

# create a custom resource of type Foo
$ kubectl create -f artifacts/examples/example-foo.yaml

# check deployments created through the custom resource
$ kubectl get deployments

I later came across operator-kit (A library for creating a Kubernetes Operator); the CRD registration part of its sample-operator also uses apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset" to connect and call:

APIExtensionClientset.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd)

I once tried to use apiextensions-apiserver/pkg/client/clientset/clientset together with a native client-go connection, i.e. to create the CRD through apiextensions-apiserver while reading it with the client provided by client-go, and ran into all kinds of package dependency problems. The real reason: apiextensions-apiserver itself depends on client-go, so a program that uses both apiextensions-apiserver and client-go ends up with resource definitions in different packages, which causes errors:

graph TD;
  sampleController-->apiextensions_apiserver;
  apiextensions_apiserver-->client-go;
  sampleController-->client-go;

Only when I saw Including CRD client into client-go repo #247 did I understand that the two are currently implemented separately, and using them together leads to these dependency problems.
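Following that approach, registering the databases.example.com CRD from Go (instead of kubectl create -f) looks roughly like this. This is a sketch assuming the apiextensions v1beta1 API of that era; the function name createDatabaseCRD is mine, not from any of the repositories above:

```go
import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// createDatabaseCRD registers the databases.example.com CRD, ignoring
// "already exists" so the call is idempotent.
func createDatabaseCRD(config *rest.Config) error {
	clientset, err := apiextensionsclient.NewForConfig(config)
	if err != nil {
		return err
	}
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "databases.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Kind:     "Database",
				ListKind: "DatabaseList",
				Plural:   "databases",
			},
		},
	}
	_, err = clientset.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd)
	if err != nil && !errors.IsAlreadyExists(err) {
		return err
	}
	return nil
}
```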

databases-crd.yaml

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com
spec:
  group: example.com
  names:
    kind: Database
    listKind: DatabaseList
    plural: databases
  scope: Namespaced
  version: v1
$ kubectl get crd databases.example.com -o yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  creationTimestamp: 2018-03-29T03:22:19Z
  generation: 1
  name: databases.example.com
  resourceVersion: "55196245"
  selfLink: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/databases.example.com
  uid: 6322bc2d-3300-11e8-a6d0-00163e000fbf
spec:
  group: example.com
  names:
    kind: Database
    listKind: DatabaseList
    plural: databases
    singular: database
  scope: Namespaced
  version: v1
status:
  acceptedNames:
    kind: Database
    listKind: DatabaseList
    plural: databases
    singular: database
  conditions:
  - lastTransitionTime: null
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: 2018-03-29T03:22:19Z
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
$ curl http://127.0.0.1:8001/apis/example.com
{
  "kind": "APIGroup",
  "apiVersion": "v1",
  "name": "example.com",
  "versions": [
    {
      "groupVersion": "example.com/v1",
      "version": "v1"
    }
  ],
  "preferredVersion": {
    "groupVersion": "example.com/v1",
    "version": "v1"
  },
  "serverAddressByClientCIDRs": null
}

Create an instance of the new CRD:

$ cat wordpress-database.yaml
apiVersion: example.com/v1
kind: Database
metadata:
  name: wordpress
spec:
  user: wp
  password: secret
  encoding: unicode

$ kubectl create -f wordpress-database.yaml
database "wordpress" created

$ kubectl get databases.example.com
NAME        AGE
wordpress   24s
$ curl http://127.0.0.1:8001/apis/example.com/v1/namespaces/default/databases
{
  "apiVersion":"example.com/v1",
  "items":[
    {
      "apiVersion":"example.com/v1",
      "kind":"Database",
      "metadata":{
        "clusterName":"",
        "creationTimestamp":"2018-03-29T03:35:18Z",
        "deletionGracePeriodSeconds":null,
        "deletionTimestamp":null,
        "initializers":null,
        "name":"wordpress",
        "namespace":"default",
        "resourceVersion":"55197116",
        "selfLink":"/apis/example.com/v1/namespaces/default/databases/wordpress",
        "uid":"338142cb-3302-11e8-a6d0-00163e000fbf"
      },
      "spec":{
        "encoding":"unicode",
        "password":"secret",
        "user":"wp"
      }
    }
  ],
  "kind":"DatabaseList",
  "metadata":{
    "continue":"",
    "resourceVersion":"55197242",
    "selfLink":"/apis/example.com/v1/namespaces/default/databases"
  }
}
$ curl http://127.0.0.1:8001/apis/example.com/v1/namespaces/default/databases/wordpress
{
  "apiVersion":"example.com/v1",
  "kind":"Database",
  "metadata":{
    "clusterName":"",
    "creationTimestamp":"2018-03-29T03:35:18Z",
    "deletionGracePeriodSeconds":null,
    "deletionTimestamp":null,
    "initializers":null,
    "name":"wordpress",
    "namespace":"default",
    "resourceVersion":"55197116",
    "selfLink":"/apis/example.com/v1/namespaces/default/databases/wordpress",
    "uid":"338142cb-3302-11e8-a6d0-00163e000fbf"
  },
  "spec":{
    "encoding":"unicode",
    "password":"secret",
    "user":"wp"
  }
}

A controller developed on top of CRDs is generally called an Operator; for a simple example see Rook's sample operator: https://github.com/rook/operator-kit/tree/master/sample-operator

Quickly Setting Up a minikube Environment

minikube is currently at 0.25.2 and supports Kubernetes 1.9.4.

Installing minikube-darwin-amd64 0.25.2 from the command line:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.2/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

minikube start --kubernetes-version v1.9.4

# for reference:
# minikube --bootstrapper=kubeadm --extra-config=kubelet.authorization-mode=AlwaysAllow --kubernetes-version=v1.8.5 start

# minikube options for configuring Kubernetes:
# https://sourcegraph.com/github.com/kubernetes/minikube@master/-/blob/docs/configuring_kubernetes.md

# official docs: https://kubernetes.io/docs/getting-started-guides/minikube/

--extra-config=apiserver.AllowPrivileged=true

References

  1. Semantic Versioning 2.0.0
  2. Kubernetes Deep Dive: Code Generation for CustomResources
  3. Kubernetes deep dive: API Server – part 1
  4. Kubernetes Deep Dive: API Server – Part 2
  5. Kubernetes Deep Dive: API Server – Part 3a
  6. Extend the Kubernetes API with CustomResourceDefinitions
  7. Extending Kubernetes 101
  8. Research notes on extending Kubernetes with operators (GitHub)
  9. An Introduction to Extending Kubernetes with CustomResourceDefinitions
  10. Operator kit: Library to create a custom controller
  11. localkube consumes CPU when system is “idle” #1158
  12. Extend Kubernetes 1.7 with Custom Resources (repo: https://github.com/yaronha/kube-crd)

Unless otherwise stated, the articles on this site are original (translations excepted); please contact the author before reprinting and include a link to the original.

