Setting Up ELK on Kubernetes (k8s): A Tutorial

【YC的迷路青春】

Every ELK component must run the same version from end to end; this guide uses 7.12.0.

In principle, this whole exercise amounts to creating a pile of YAML files.
You should be able to just create all of the YAML files in one go; the wiring between them is already in place. Nothing here touches log shipping yet, so the result should look the same for everyone: a sort of common-baseline article. I felt an article like this ought to exist; perhaps I simply never found one.

With the content of this article, standing up a basic ELK stack on k8s should be considerably easier.

If you enter everything in this article verbatim and still hit an error, please report it to me so I get a chance to fix it. I try to reply as soon as I see a report. Thanks.

Plenty of articles can set everything up for you in one shot, but building it piece by piece like this should make it easier to understand what each component does.

I. Elasticsearch

Create these three YAML files:
1. the Elasticsearch config file (a ConfigMap)
2. the Deployment
3. the Service
We start with a Deployment because it is easier to stand up and easier to understand; once everything works you can convert it to a StatefulSet. Create each file and apply it with kubectl apply -f <file>.

kind: ConfigMap
apiVersion: v1
metadata:
  name: elasticsearch-config-yc
data:
  elasticsearch.yml: |
    cluster.name: "docker-cluster"
    network.host: 0.0.0.0
    xpack.license.self_generated.type: trial
    xpack.monitoring.collection.enabled: true
kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yc-elasticsearch
  template:
    metadata:
      labels:
        app: yc-elasticsearch
    spec:
      volumes:
        - name: config
          configMap:
            name: elasticsearch-config-yc
            defaultMode: 420
      initContainers:
        - name: increase-vm-max-map
          image: busybox
          command:
            - sysctl
            - '-w'
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: yc-elasticsearch
          image: 'docker.elastic.co/elasticsearch/elasticsearch:7.12.0'
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: ES_JAVA_OPTS
              value: '-Xms512m -Xmx512m'
            - name: discovery.type
              value: single-node
          volumeMounts:
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
kind: Service
apiVersion: v1
metadata:
  name: yc-elasticsearch
spec:
  ports:
    - name: yc-elasticsearch
      protocol: TCP
      port: 80
      targetPort: 9200
  selector:
    app: yc-elasticsearch
  type: ClusterIP
  sessionAffinity: None

Now run curl <service IP>; if the response contains "tagline" : "You Know, for Search", it is working.
Elasticsearch does not depend on any other component, which is why ELK setups usually start with the E.

You can run kubectl logs <pod name> to confirm it started successfully.
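The check above can be scripted. A minimal sketch, where SERVICE_IP is a placeholder for the ClusterIP reported by kubectl get svc yc-elasticsearch:

```shell
# Report whether an Elasticsearch root-endpoint response looks healthy.
check_es() {
  echo "$1" | grep -q '"tagline" : "You Know, for Search"' \
    && echo "elasticsearch is up" \
    || echo "elasticsearch is not ready yet"
}

# Against the cluster (SERVICE_IP is hypothetical):
#   check_es "$(curl -s http://SERVICE_IP:80)"

# Offline demonstration with a canned response:
check_es '{ "tagline" : "You Know, for Search" }'   # prints: elasticsearch is up
```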

II. Logstash

Next up: Logstash. Create these four YAML files:

1. the pipeline config under /usr/share/logstash/pipeline
2. the config file at /usr/share/logstash/config/logstash.yml
3. the Deployment
4. the Service
kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-config-yc
  namespace: default
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: [ "http://yc-elasticsearch.default.svc.cluster.local:80" ]

Later, when you start shipping logs, the pipeline below is what you will edit:

kind: ConfigMap
apiVersion: v1
metadata:
  name: logstash-pipelines-yc
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["http://yc-elasticsearch.default.svc.cluster.local:80"]
        index => "log_test"
      }
    }

When you start shipping logs, you may also need to add a few volumes to the Deployment below.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yc-logstash
  template:
    metadata:
      labels:
        app: yc-logstash
    spec:
      volumes:
        - name: config
          configMap:
            name: logstash-config-yc
            defaultMode: 420
        - name: pipelines
          configMap:
            name: logstash-pipelines-yc
            defaultMode: 420
      containers:
        - name: yc-logstash
          image: 'docker.elastic.co/logstash/logstash:7.12.0'
          ports:
            - containerPort: 5044
              protocol: TCP
            - containerPort: 5000
              protocol: TCP
            - containerPort: 5000
              protocol: UDP
            - containerPort: 9600
              protocol: TCP
          env:
            - name: ELASTICSEARCH_HOST
              value: 'http://yc-elasticsearch.default.svc.cluster.local'
            - name: LS_JAVA_OPTS
              value: '-Xms512m -Xmx512m'
          volumeMounts:
            - name: pipelines
              mountPath: /usr/share/logstash/pipeline
            - name: config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
kind: Service
apiVersion: v1
metadata:
  name: yc-logstash
spec:
  ports:
    - name: logstash
      protocol: TCP
      port: 80
      targetPort: 9600
    - name: filebeat
      protocol: TCP
      port: 5044
      targetPort: 5044
  selector:
    app: yc-logstash
  type: ClusterIP
  sessionAffinity: None
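As with Elasticsearch, you can sanity-check Logstash through its monitoring API (container port 9600, mapped to port 80 on the yc-logstash Service). A sketch that pulls the status field out of the API root response; the exact response shape is an assumption based on Logstash 7.x:

```shell
# Extract the "status" field (e.g. "green") from a Logstash API root response.
ls_status() {
  echo "$1" | sed -n 's/.*"status" *: *"\([a-z]*\)".*/\1/p'
}

# Against the cluster (SERVICE_IP is hypothetical):
#   ls_status "$(curl -s http://SERVICE_IP:80)"

# Offline demonstration with a canned response:
ls_status '{"host":"yc-logstash-abc","version":"7.12.0","status":"green"}'   # prints: green
```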

III. Kibana

Next comes Kibana:
1. the config file at /usr/share/kibana/config/kibana.yml
2. the Deployment
3. the Service

kind: ConfigMap
apiVersion: v1
metadata:
  name: kibana-config-yc
data:
  kibana.yml: |
    server.name: kibana
    server.host: 0.0.0.0
    elasticsearch.hosts: [ "http://yc-elasticsearch.default.svc.cluster.local:80" ]
    monitoring.ui.container.elasticsearch.enabled: true
kind: Deployment
apiVersion: apps/v1
metadata:
  name: yc-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: yc-kibana
  template:
    metadata:
      labels:
        component: yc-kibana
    spec:
      volumes:
        - name: config
          configMap:
            name: kibana-config-yc
            defaultMode: 420
      containers:
        - name: elk-kibana
          image: 'docker.elastic.co/kibana/kibana:7.12.0'
          ports:
            - name: yc-kibana
              containerPort: 5601
              protocol: TCP
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
kind: Service
apiVersion: v1
metadata:
  name: yc-kibana
spec:
  ports:
    - name: yc-kibana
      protocol: TCP
      port: 80
      targetPort: 5601
  selector:
    component: yc-kibana
  type: LoadBalancer

At this point the ELK stack itself is up; since the Kibana Service is of type LoadBalancer, you can open its external IP (shown by kubectl get svc yc-kibana) in a browser. What remains is getting logs in.

I will cover two ways of importing logs.
The first is to declare storage: when deploying a service, mount a volume and write its logs into the storage, then read them from there.
(Adding a matching volume on the Logstash side makes for a simple test.)
The second is Filebeat, which I will cover in the next installment.

Hopefully this will be helpful.

Our approach for testing volumes: create a PersistentVolume in the storage account that connects to an Azure Files file share.

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: log-azurefile
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: log-azurefile  # must match the PersistentVolume name below
  storageClassName: ''
  volumeMode: Filesystem
kind: PersistentVolume
apiVersion: v1
metadata:
  name: log-azurefile
spec:
  capacity:
    storage: 2Gi
  azureFile:
    secretName: elk-secret
    shareName: yc/logs
    secretNamespace: null
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: log-azurefile
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
    - mfsymlinks
    - nobrl
  volumeMode: Filesystem

We also need a Secret that lets the cluster connect to the file share:

kind: Secret
apiVersion: v1
metadata:
  name: elk-secret
  namespace: default
data:
  azurestorageaccountkey: xxxxxxx
  azurestorageaccountname: xxxxxxxxx
type: Opaque
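One gotcha: values under data: in a Secret must be base64-encoded, not plain text. A sketch for producing them (the account name and key here are made up):

```shell
# Base64-encode the (hypothetical) Azure storage account credentials.
# printf avoids the trailing newline that echo would fold into the encoding.
name_b64=$(printf '%s' 'mystorageaccount' | base64)
key_b64=$(printf '%s' 'not-a-real-account-key' | base64)
echo "azurestorageaccountname: $name_b64"
echo "azurestorageaccountkey: $key_b64"
```

Alternatively, kubectl create secret generic elk-secret --from-literal=azurestorageaccountname=... --from-literal=azurestorageaccountkey=... does the encoding for you.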

With that, the yc/logs file share is reachable through the PersistentVolumeClaim log-azurefile.

Now go back to the Logstash Deployment and add:

volumes:
  - name: volume-log
    persistentVolumeClaim:
      claimName: log-azurefile

volumeMounts:
  - name: volume-log
    mountPath: /usr/local/tomcat/logs

This wires the Logstash Deployment's /usr/local/tomcat/logs directory to the yc/logs file share.

Create a file in yc/logs and the same file appears under /usr/local/tomcat/logs in Logstash, and vice versa.

Add a file input to logstash-pipelines:

input {
  beats {
    port => 5044
  }
  file {
    path => "/usr/local/tomcat/logs/*.log"
  }
}
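By default the file input tails: for a file it has not seen before it starts reading at the end and only picks up lines appended afterwards. For testing, it can help to read files from the start; start_position and sincedb_path are standard file-input options, and the values below are suggestions rather than part of the original setup:

```conf
input {
  beats {
    port => 5044
  }
  file {
    path => "/usr/local/tomcat/logs/*.log"
    start_position => "beginning"   # default is "end" (tail new lines only)
    sincedb_path => "/dev/null"     # forget read offsets; re-reads on restart (testing only)
  }
}
```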

First:
kubectl exec -it logstash-xxxx -- bash
cd /usr/local/tomcat/logs/

Check whether the directory is wired up, then create a .log file and see whether anything arrives in Kibana.
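The write-and-watch step can be rehearsed offline. This sketch uses a temp directory as a stand-in for /usr/local/tomcat/logs; inside the pod you would run the same echo against the real path:

```shell
# Simulate dropping a log line into the shared directory.
LOGDIR=$(mktemp -d)   # stand-in for /usr/local/tomcat/logs
printf '%s INFO hello from the shared volume\n' "$(date '+%Y-%m-%dT%H:%M:%S')" >> "$LOGDIR/test.log"
cat "$LOGDIR/test.log"
```

In the pod, after something like echo "hello" >> /usr/local/tomcat/logs/test.log, the line should land in the log_test index; you can also curl the Elasticsearch service for _cat/indices to see the index appear.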

