37 files changed: +64 −105 lines changed
![](../.gitbook/assets/CloudNativeLandscape_Serverless_latest.png)

- Image source: [https://s.cncf.io](https://s.cncf.io).
-
+ Image source: [https://s.cncf.io](https://landscape.cncf.io/serverless).

* [Awesome Kubernetes](https://github.com/ramitsurana/awesome-kubernetes)
* [Awesome Docker](https://github.com/veggiemonk/awesome-docker)
* [OperatorHub.io](https://www.operatorhub.io/)
-

@@ -193,5 +193,4 @@ $ argo -n argo delete hello-world-4dhg8
Workflow 'hello-world-4dhg8' deleted
```

- For more workflow YAML formats, see the [official documentation](https://applatix.com/open-source/argo/docs/argo_v2_yaml.html) and the [workflow examples](https://github.com/argoproj/argo/tree/master/examples).
-
+ For more workflow YAML formats, see the [official documentation](https://argoproj.github.io/argo-workflows/).

@@ -13,7 +13,7 @@ Draft consists of three main commands
Because Draft needs to build images and deploy applications to a Kubernetes cluster, before installing Draft you need to:

* Deploy a Kubernetes cluster; for deployment steps see the [Kubernetes setup guide](../../setup/index.md)
- * Install and initialize helm (v2.4.x is required, and don't forget to run `helm init`); for detailed steps see the [helm guide](https://github.com/feiskyer/kubernetes-handbook/tree/549e0e3c9ba0175e64b2d4719b5a46e9016d532b/apps/helm-app.md)
+ * Install and initialize helm (v2.4.x is required, and don't forget to run `helm init`); for detailed steps see the [helm guide](../../apps/index/helm.md)
* Register a docker registry account, e.g. [Docker Hub](https://hub.docker.com/) or [Quay.io](https://quay.io/)
* Configure an Ingress Controller and create a DNS wildcard `*` A record (e.g. `*.draft.example.com`) pointing to the Ingress IP address. The simplest way to create an Ingress Controller is with helm:

@@ -118,4 +118,3 @@ Watching local files for changes...
$ curl virulent-sheep.app.feisky.xyz
Hello, World!
```
-

@@ -281,5 +281,3 @@ $ kubectl apply -f <(linkerd-inject -f <your k8s config>.yml -linkerdPort 4140)
* [A SERVICE MESH FOR KUBERNETES](https://buoyant.io/2016/10/04/a-service-mesh-for-kubernetes-part-i-top-line-service-metrics/)
* [Linkerd examples](https://github.com/linkerd/linkerd-examples)
* [Service Mesh Pattern](http://philcalcado.com/2017/08/03/pattern_service_mesh.html)
- * [https://conduit.io](https://conduit.io)
-

@@ -38,7 +38,7 @@ prometheus ClusterIP 10.0.205.82 <none> 9090/TCP 163m
proxy-api ClusterIP 10.0.170.201 <none> 8086/TCP 163m
web ClusterIP 10.0.88.136 <none> 8084/TCP,9994/TCP 163m

- $ kubectl -n linkerd get pod 
+ $ kubectl -n linkerd get pod
NAME READY STATUS RESTARTS AGE
controller-67489d768d-75wjz 5/5 Running 0 163m
grafana-5df745d8b8-pv6tf 2/2 Running 0 163m
@@ -95,4 +95,3 @@ end id=0:810 src=10.244.6.239:57202 dst=10.244.1.237:8080 grpc-status=OK duratio
* [A SERVICE MESH FOR KUBERNETES](https://buoyant.io/2016/10/04/a-service-mesh-for-kubernetes-part-i-top-line-service-metrics/)
* [Service Mesh Pattern](http://philcalcado.com/2017/08/03/pattern_service_mesh.html)
* [https://linkerd.io/2/overview/](https://linkerd.io/2/overview/)
-

@@ -8,7 +8,7 @@ Deployment has a built-in RollingUpdate strategy, so there is no need to call `kubec

Rolling Update applies to `Deployment` and `Replication Controller`; the official recommendation is to use Deployment instead of Replication Controller.

- For rolling updates with a ReplicationController, see the official docs: [https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/](https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/)
+ For rolling updates with a ReplicationController, see the official docs: [https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment)

## The relationship between ReplicationController and Deployment

@@ -101,7 +101,7 @@ clean:
rm -f hello${TAG}
```

- **Build**
+ **Build**

```text
make all
@@ -154,7 +154,7 @@ spec:
kubectl create -f rolling-update-test.yaml
```

- **Modify the traefik ingress configuration**
+ **Modify the traefik ingress configuration**

Add the new service's configuration to the `ingress.yaml` file.

@@ -182,7 +182,7 @@ kubectl create -f rolling-update-test.yaml
This is version 1.
```

- **Rolling upgrade**
+ **Rolling upgrade**

Simply change the `image` in `rolling-update-test.yaml` to the new version's image name, then run:

@@ -237,4 +237,3 @@ replicationcontroller "zeppelin-controller" rolling updated
* [Running a Stateless Application Using a Deployment](https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/)
* [Simple Rolling Update](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/cli/simple-rolling-update.md)
* [使用 kubernetes 的 deployment 进行 RollingUpdate](https://segmentfault.com/a/1190000008232770)
-

* [istio-dev@](https://groups.google.com/forum/#!forum/istio-dev)
* [istio-announce@](https://groups.google.com/forum/#!forum/istio-announce)
* [Twitter](https://twitter.com/IstioMesh)
- * [Rocket.Chat](https://istio.rocket.chat/home)
* [FAQ](https://istio.io/faq/)
* [Glossary](https://istio.io/docs/reference/glossary/)
-

@@ -60,6 +60,5 @@ hack/cherry_pick_pull.sh upstream/release-1.7 51870
* [**Kubernetes Contributor Documentation**](https://www.kubernetes.dev/docs/)
* [Special Interest Groups](https://github.com/kubernetes/community)
* [Feature Tracking and Backlog](https://github.com/kubernetes/features)
- * [Community Expectations](https://github.com/kubernetes/community/blob/master/contributors/guide/community-expectations.md)
+ * [Community Expectations](https://github.com/kubernetes/community/blob/master/contributors/guide/expectations.md)
* [Kubernetes release managers](https://github.com/kubernetes/sig-release/blob/master/release-managers.md)
-

@@ -188,7 +188,6 @@ Etcd v3 also improved the expiry mechanism: the TTL is set on a lease, and

* [Etcd website](https://coreos.com/etcd/)
* [Etcd github](https://github.com/coreos/etcd/)
- * [Projects using etcd](https://github.com/coreos/etcd/blob/master/Documentation/production-users.md)
+ * [Projects using etcd](https://etcd.io/docs/v3.5/integrations/#projects-using-etcd)
* [http://jolestar.com/etcd-architecture/](http://jolestar.com/etcd-architecture/)
* [etcd 从应用场景到实现原理的全方位解读](http://www.infoq.com/cn/articles/etcd-interpretation-application-scenario-implement-principle)
-

@@ -299,6 +299,5 @@ $ kubectl delete ns federation-system

## References

- * [Kubernetes federation](https://kubernetes.io/docs/concepts/cluster-administration/federation/)
- * [kubefed](https://kubernetes.io/docs/tasks/federation/set-up-cluster-federation-kubefed/)
-
+ * [Kubernetes federation](https://kubernetes.io/blog/2018/12/12/kubernetes-federation-evolution/)
+ * [kubefed](https://github.com/kubernetes-sigs/kubefed)

@@ -88,7 +88,7 @@ The kubelet periodically (`housekeeping-interval`) checks whether system resources have reached

| Eviction Signal | Condition | Description |
| :--- | :--- | :--- |
- | `memory.available` | MemoryPressure | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` (for the computation, see [here](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/memory-available.sh)) |
+ | `memory.available` | MemoryPressure | `memory.available` := `node.status.capacity[memory]` - `node.stats.memory.workingSet` (for the computation, see [here](https://kubernetes.io/docs/tasks/administer-cluster/memory-available.sh)) |
| `nodefs.available` | DiskPressure | `nodefs.available` := `node.stats.fs.available` (kubelet volumes, logs, etc.) |
| `nodefs.inodesFree` | DiskPressure | `nodefs.inodesFree` := `node.stats.fs.inodesFree` |
| `imagefs.available` | DiskPressure | `imagefs.available` := `node.stats.runtime.imagefs.available` (images, container writable layers, etc.) |
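
These signals feed the kubelet's eviction thresholds. As a hedged illustration only — the `KubeletConfiguration` file format shown here is newer than the era of this handbook page, and all threshold values are made up — the mapping might look like:

```yaml
# Sketch: hard eviction thresholds in a kubelet config file (example values only).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"   # evict pods when free memory drops below 200Mi
  nodefs.available: "10%"     # evict when node filesystem free space drops below 10%
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
```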

@@ -719,7 +719,7 @@ A Deployment also needs a [`.spec` section](https://github.com/kubernetes/community/b

`.spec.template` is a [pod template](https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#pod-template). It has exactly the same schema as a [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/), except that it is nested and does not need the `apiVersion` and `kind` fields.

- In addition, to scope the Pods it manages, the pod template in a Deployment must specify appropriate labels (which must not overlap with other controllers; see [selector](https://github.com/kubernetes/kubernetes.github.io/blob/master/docs/concepts/workloads/controllers/deployment.md#selector)) and an appropriate restart policy.
+ In addition, to scope the Pods it manages, the pod template in a Deployment must specify appropriate labels (which must not overlap with other controllers; see [selector](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#selector)) and an appropriate restart policy.

[`.spec.template.spec.restartPolicy`](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/) may be set to `Always`, which is also the default when unspecified.

@@ -749,7 +749,7 @@ A Deployment also needs a [`.spec` section](https://github.com/kubernetes/community/b

#### Rolling Update Deployment

- When `.spec.strategy.type==RollingUpdate`, the Deployment updates Pods via a [rolling update](https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/). You can specify `maxUnavailable` and `maxSurge` to control the rolling update process, as in the sketch below.
+ When `.spec.strategy.type==RollingUpdate`, the Deployment updates Pods via a [rolling update](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment). You can specify `maxUnavailable` and `maxSurge` to control the rolling update process, as in the sketch below.

**Max Unavailable**
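
For context, `maxUnavailable` and `maxSurge` live under `.spec.strategy.rollingUpdate`; a minimal sketch (the name and image are illustrative, not from the handbook):

```yaml
# Sketch: a rollout that keeps at most 1 Pod down and at most 1 extra Pod up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 Pod below the desired count during the update
      maxSurge: 1         # at most 1 Pod above the desired count during the update
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21
```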

@@ -797,5 +797,4 @@ Deployment revision history is stored in the ReplicaSets it controls.

### kubectl rolling update

- Although [Kubectl rolling update](https://kubernetes.io/docs/tasks/run-application/rolling-update-replication-controller/) updates Pods and ReplicationControllers in a similar way, we recommend Deployments: they are declarative, server-side, and have additional features, such as rolling back to any earlier revision even after the rolling update has finished.
-
+ Although [Kubectl rolling update](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment) updates Pods and ReplicationControllers in a similar way, we recommend Deployments: they are declarative, server-side, and have additional features, such as rolling back to any earlier revision even after the rolling update has finished.

@@ -273,7 +273,5 @@ spec:
* [Kubernetes Ingress Controller](https://github.com/kubernetes/ingress/tree/master)
* [使用 NGINX Plus 负载均衡 Kubernetes 服务](http://dockone.io/article/957)
* [使用 NGINX 和 NGINX Plus 的 Ingress Controller 进行 Kubernetes 的负载均衡](http://www.cnblogs.com/276815076/p/6407101.html)
- * [Kubernetes : Ingress Controller with Træfɪk and Let's Encrypt](https://blog.osones.com/en/kubernetes-ingress-controller-with-traefik-and-lets-encrypt.html)
- * [Kubernetes : Træfɪk and Let's Encrypt at scale](https://blog.osones.com/en/kubernetes-traefik-and-lets-encrypt-at-scale.html)
- * [Kubernetes Ingress Controller-Træfɪk](https://docs.traefik.io/user-guide/kubernetes/)
- * [Kubernetes 1.2 and simplifying advanced networking with Ingress](http://blog.kubernetes.io/2016/03/Kubernetes-1.2-and-simplifying-advanced-networking-with-Ingress.html)
+ * [Kubernetes Ingress Controller-Træfɪk](https://doc.traefik.io/traefik/providers/kubernetes-ingress/)
+ * [Kubernetes 1.2 and simplifying advanced networking with Ingress](https://kubernetes.io/blog/2016/03/kubernetes-1-2-and-simplifying-advanced-networking-with-ingress/)
409
409
kind: StatefulSet
410
410
metadata:
411
411
name: web
412
- spec:
412
+ spec:
413
413
serviceName: "nginx"
414
414
replicas: 2
415
415
selector:
@@ -484,9 +484,9 @@ logs-web-1 us-central1-a
484
484
存储快照是 v1.12 新增的 Alpha 特性,用来支持给存储卷创建快照。支持的插件包括
485
485
486
486
* [GCE Persistent Disk CSI Driver](https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver)
487
- * [OpenSDS CSI Driver](https://github.com/ opensds/nbp/tree/master/ csi/server )
488
- * [Ceph RBD CSI Driver](https://github.com/ceph/ceph-csi/tree/master/pkg/rbd )
489
- * [Portworx CSI Driver](https://github. com/libopenstorage/openstorage/tree/master/ csi)
487
+ * [OpenSDS CSI Driver](https://docs. opensds.io/guides/user-guides/csi/ceph- csi/)
488
+ * [Ceph RBD CSI Driver](https://github.com/ceph/ceph-csi)
489
+ * [Portworx CSI Driver](https://docs.portworx. com/portworx- csi-driver/ )
490
490
491
491

492
492
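A snapshot request against one of these drivers is itself a Kubernetes object. A minimal sketch, assuming the v1alpha1 snapshot API that shipped with v1.12 (the class and names are hypothetical):

```yaml
# Sketch: snapshot an existing PVC via a CSI driver's snapshot class.
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: data-snapshot                # hypothetical name
spec:
  snapshotClassName: csi-snapclass   # hypothetical VolumeSnapshotClass
  source:
    kind: PersistentVolumeClaim
    name: data-pvc                   # the PVC to snapshot
```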

@@ -547,4 +547,3 @@ spec:
* [Dynamic Volume Provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/)
* [Kubernetes CSI Documentation](https://kubernetes-csi.github.io/docs/)
* [Volume Snapshots Documentation](https://kubernetes.io/docs/concepts/storage/volume-snapshots/)
-

@@ -10,7 +10,7 @@ Kubernetes provides three ways to configure a Security Context:

## Container-level Security Context

- [Container-level Security Context](https://kubernetes.io/docs/api-reference/v1.15/#securitycontext-v1-core) applies only to the specified container and does not affect volumes. For example, to run a container in privileged mode:
+ [Container-level Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) applies only to the specified container and does not affect volumes. For example, to run a container in privileged mode:

```yaml
apiVersion: v1

## Pod-level Security Context

- [Pod-level Security Context](https://kubernetes.io/docs/api-reference/v1.15/#podsecuritycontext-v1-core) applies to all containers in the Pod, and also affects volumes (including `fsGroup` and `selinuxOptions`).
+ [Pod-level Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) applies to all containers in the Pod, and also affects volumes (including `fsGroup` and `selinuxOptions`).

```yaml
apiVersion: v1
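
Both snippets above are cut off by the diff context. As a self-contained sketch of the container-level case (pod name and image are hypothetical, not from the handbook):

```yaml
# Sketch: container-level securityContext enabling privileged mode.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-demo        # hypothetical name
spec:
  containers:
    - name: demo
      image: busybox:1.36
      command: ["sleep", "3600"]
      securityContext:
        privileged: true       # affects only this container, not volumes
```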

@@ -297,7 +297,7 @@ spec:

Note that an Ingress does not create a load balancer by itself; the cluster must run an ingress controller that manages load balancers according to Ingress definitions. The community currently provides reference implementations for nginx and gce.

- Traefik provides an easy-to-use Ingress Controller; see [https://docs.traefik.io/user-guide/kubernetes/](https://docs.traefik.io/user-guide/kubernetes/) for usage.
+ Traefik provides an easy-to-use Ingress Controller; see [https://doc.traefik.io/traefik/providers/kubernetes-ingress/](https://doc.traefik.io/traefik/providers/kubernetes-ingress/) for usage.

For more on Ingress and Ingress Controllers, see [ingress](ingress.md).
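
For orientation, the object an ingress controller consumes looks roughly like this — a minimal sketch using the current `networking.k8s.io/v1` API, which is newer than this handbook page (host and service names are hypothetical):

```yaml
# Sketch: route one host to one backend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # hypothetical name
spec:
  rules:
    - host: web.example.com      # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # hypothetical Service name
                port:
                  number: 80
```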

# CONTRIBUTING

1. [Fork on Github](https://github.com/feiskyer/kubernetes-handbook/fork).
- 2. Clone the forked repo: `git clone https://github.com/<user-name>/kubernetes-handbook -b en`.
+ 2. Clone the forked repo: `git clone https://github.com/<user-name>/kubernetes-handbook`.
3. Create a new branch, add some changes and commit: `git checkout -b new-branch`.
4. Push the branch to github: `git commit -am "comments"; git push`.
- 5. File a Pull Request on Github to `en` branch.
+ 5. File a Pull Request on Github.

@@ -22,7 +22,7 @@ ABAC (Attribute Based Access Control) was a decent idea, but in Kub

## Basic concepts

- You need to understand some basic RBAC concepts and ideas. RBAC is an authorization mechanism that grants users access to [Kubernetes API resources](https://kubernetes.io/docs/api-reference/v1.15/).
+ You need to understand some basic RBAC concepts and ideas. RBAC is an authorization mechanism that grants users access to [Kubernetes API resources](https://kubernetes.io/docs/reference/).

![RBAC architecture diagram 1](../../.gitbook/assets/rbac1%20%281%29.png)
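
The core of the model is a Role naming allowed verbs on resources, plus a RoleBinding attaching it to subjects. A minimal sketch (namespace, user, and names are hypothetical):

```yaml
# Sketch: grant user "jane" read-only access to pods in "default".
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]              # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
  - kind: User
    name: jane                   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```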

@@ -230,8 +230,7 @@ kubectl create clusterrolebinding permissive-binding \

## References

* [RBAC documentation](https://kubernetes.io/docs/admin/authorization/rbac/)
+ * [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)
* [Google Cloud Next talks 1](https://www.youtube.com/watch?v=Cd4JU7qzYbE#t=8m01s%20)
* [Google Cloud Next talks 2](https://www.youtube.com/watch?v=18P7cFc6nTU#t=41m06s%20)
* [在 Kubernetes Pod 中使用 Service Account 访问 API Server](http://tonybai.com/2017/03/03/access-api-server-from-a-pod-through-serviceaccount/)
- * Partially translated from [RBAC Support in Kubernetes](http://blog.kubernetes.io/2017/04/rbac-support-in-kubernetes.html) (reposted from the [Kubernetes Chinese community](https://www.kubernetes.org.cn/1879.html), translated by Cui Zong, with minor edits by [Jimmy Song](http://rootsongjc.github.com/about))
-

@@ -67,5 +67,4 @@ Kubernetes' Cloud Provider is currently being refactored

* Configure kube-controller-manager with `--cloud-provider=external`
* Start `cloud-controller-manager`

- For concrete implementations, see [rancher-cloud-controller-manager](https://github.com/rancher/rancher-cloud-controller-manager) and [cloud-controller-manager](https://github.com/kubernetes/kubernetes/blob/master/cmd/cloud-controller-manager/controller-manager.go).
-
+ For concrete implementations, see [rancher-cloud-controller-manager](https://github.com/rancher/rancher-cloud-controller-manager) and [cloud-controller-manager](https://github.com/kubernetes/cloud-provider).

@@ -16,12 +16,11 @@ helm install stable/nginx-ingress --name nginx-ingress --set rbac.create=true

* [HAProxy Ingress controller](https://github.com/jcmoraisjr/haproxy-ingress)
* [Linkerd](https://linkerd.io/config/0.9.1/linkerd/index.html#ingress-identifier)
- * [traefik](https://docs.traefik.io/configuration/backends/kubernetes/)
+ * [traefik](https://doc.traefik.io/traefik/providers/kubernetes-ingress/)
* [AWS Application Load Balancer Ingress Controller](https://github.com/coreos/alb-ingress-controller)
* [kube-ingress-aws-controller](https://github.com/zalando-incubator/kube-ingress-aws-controller)
* [Voyager: HAProxy Ingress Controller](https://github.com/appscode/voyager)

## How to use Ingress

For details on using Ingress, see [here](../../concepts/objects/ingress.md).
-

## Network plugins that support Network Policy

* [Calico](https://www.projectcalico.org/)
+ * [Cilium](https://cilium.io/)
* [Romana](https://github.com/romana/romana)
* [Weave Net](https://www.weave.works/)
- * [Trireme](https://github.com/aporeto-inc/trireme-kubernetes)
- * [OpenContrail](http://www.opencontrail.org/)

## How to use Network Policy

For details on using Network Policy, see [here](../concepts/objects/network-policy.md).
-
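
For reference, the object these plugins enforce is declarative; a minimal sketch (labels and name are hypothetical):

```yaml
# Sketch: allow ingress to "db" pods only from "web" pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db          # hypothetical name
spec:
  podSelector:
    matchLabels:
      role: db                   # the pods this policy protects
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: web          # only these pods may connect
```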

@@ -129,7 +129,7 @@ kubectl apply -f http://docs.projectcalico.org/v2.1/getting-started/kubernetes/i

## [OVN](ovn-kubernetes.md)

- [OVN (Open Virtual Network)](http://openvswitch.org/support/dist-docs/ovn-architecture.7.html) is OVS's native network virtualization solution, designed to address the performance problems of traditional SDN architectures (such as Neutron DVR).
+ [OVN (Open Virtual Network)](https://www.ovn.org/en/) is OVS's native network virtualization solution, designed to address the performance problems of traditional SDN architectures (such as Neutron DVR).

OVN provides two networking options for Kubernetes:

@@ -236,4 +236,3 @@ kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/m
kubectl -n kube-system delete ds kube-proxy
docker run --privileged --net=host gcr.io/google_containers/kube-proxy-amd64:v1.7.3 kube-proxy --cleanup-iptables
```
-