
Tencent Cloud Kubernetes Service (TKE) Guide

Tencent Cloud · 2026-04-30 15:22:49 · MaxCloud

Before We Begin: What Is TKE (and Why You Should Care)

If you’ve ever tried to run containers in production, you already know the truth: containers are easy; the rest of life is not. Kubernetes exists to stop you from manually babysitting servers, SSH-ing into machines like it’s 2013, and praying that scaling doesn’t break everything you built during the weekend you definitely didn’t mean to spend.

Tencent Cloud’s Kubernetes service, also known as TKE (Tencent Kubernetes Engine), is basically Kubernetes with training wheels that still let you go fast. You get managed control plane features, node management, and a smoother path to deploying real workloads without assembling an entire “Cluster Engineering Experience” from scratch.

This guide aims to be practical: you’ll build a working mental model, set up a cluster, deploy an example application, and learn how to troubleshoot the classic issues. If you follow along, you should end up with a Kubernetes environment that’s ready to run your workloads and not ready to shame you during a demo.

Quick Checklist: What You Need Before Creating a Cluster

Let’s gather the stuff you’ll need, like ingredients before cooking. Not “ingredients you might find in the pantry” — actual ingredients.

  • Access to Tencent Cloud (obviously).
  • A basic understanding of containers and Kubernetes concepts (if you don’t, that’s okay; we’ll keep it friendly).
  • A decision on region (because resources live somewhere).
  • Network preferences: do you want public access or private only?
  • Node type/size estimates (you don’t need perfect accuracy, but guessing wildly is how you get 3 a.m. alerts).
  • Willingness to learn at least one command: kubectl. It won’t hurt you. Much.

Optional but strongly recommended:

  • Familiarity with YAML (Kubernetes likes YAML the way cats like knocking things off tables).
  • An understanding of container images and registries.
  • Basic security hygiene knowledge: IAM roles, access control, and least privilege (yes, we’ll mention it again later because it matters).

Planning Your TKE Cluster: The Decisions That Matter

Cluster creation is where good ideas go to become real systems. But before you click “Create,” make a few key decisions. This is less about being fancy and more about avoiding “oops” later.

Choose Cluster Type: Public, Private, or Hybrid-ish

Most people want to access Kubernetes. That usually means some public connectivity, at least for the API endpoint and possibly for services. But sometimes you want to lock things down and expose only what’s necessary.

Here’s the practical view:

  • Public endpoint: easier management, but more attention required for security.
  • Private endpoint: safer by default, but your administration tooling may need network connectivity (VPN, bastion host, or private network access).
  • Hybrid approach: common in real environments; expose ingress for specific services while keeping internal traffic private.

No matter what you pick, plan the security story. Kubernetes is powerful, and power requires boundaries.

Network Mode and CNI: Where Pod IPs Live Their Best Lives

Kubernetes networking can be surprisingly emotional. Fortunately, TKE provides managed options. You’ll still need to understand the basics: pod-to-pod networking, service discovery, and ingress routing.

When you create your cluster, you’ll likely be asked about VPC choices and network settings. The core idea is:

  • Nodes belong to a VPC (Virtual Private Cloud) network.
  • Pods need routable IP addresses within that networking model.
  • Services provide stable virtual IPs and DNS names.

If something goes wrong later (pods can’t talk to each other, services don’t resolve, ingress doesn’t reach backends), the networking decisions you made here are often the reason.

Node Configuration: The “How Big Is Big?” Problem

Nodes are where your workloads run. Think of nodes as apartment buildings, and pods as your residents. If you underbuild, people keep moving in and you run out of rooms. If you overbuild, you pay rent for empty apartments and wonder why your budget is crying.

When deciding node size and count:

  • Start small for learning environments (you can scale later).
  • For production, plan for growth and consider autoscaling.
  • Consider resource requests/limits to avoid noisy-neighbor chaos.

If you have no idea where to start: pick a modest instance size, deploy one or two sample workloads, and confirm that your scaling behavior is sane. Then adjust.

Creating Your TKE Cluster: Step-by-Step

Now we get to the fun part: creating the cluster. The exact button names may change slightly over time, because cloud consoles evolve like houseplants: slowly, unpredictably, and with occasional surprises.

The general flow is consistent:

  1. Log in to Tencent Cloud Console.
  2. Find the Kubernetes service section (TKE).
  3. Click Create Cluster.
  4. Select region and cluster options.
  5. Configure networking (VPC, CIDR ranges if applicable).
  6. Choose node groups (instance types, counts, autoscaling if needed).
  7. Configure authentication and access (how you’ll retrieve cluster credentials).
  8. Confirm settings and create.
  9. Wait for the cluster to become ready.

As you configure, keep an eye on:

  • Node group setup: single group vs multiple groups.
  • Whether autoscaling is enabled (useful for real workloads).
  • How you’ll access the Kubernetes API endpoint securely.
  • Whether you want to install managed addons by default.

While the cluster is provisioning, don’t stare at the screen like it will speed up. It won’t. But you can prepare your workstation, because the next steps require kubectl and a kubeconfig file.

Getting Access: kubectl and kubeconfig

Kubernetes management is mostly kubectl. The kubeconfig file is your golden ticket. It contains cluster endpoint info, credentials, and context names.

Obtain the kubeconfig from Tencent Cloud

In the TKE console, there’s typically an option to “Connect” or “Download kubeconfig” (wording varies). Download it to your local machine.

Then place it where your kubectl expects it. Common options:

  • Set environment variable KUBECONFIG to point to the file.
  • Or merge it into your default ~/.kube/config.

For sanity, verify kubectl can talk to the cluster:

  • kubectl cluster-info
  • kubectl get nodes

If you see a node listing, congratulations: you’ve successfully summoned the cluster. If you see errors, don’t panic. Most “can’t connect” issues are either network-related (endpoint unreachable) or credential-related (wrong context or expired permissions).

Understand kubectl contexts (so you don’t deploy to the wrong cluster)

It’s easy to connect to multiple clusters and then accidentally apply manifests to the wrong environment. That’s how you end up editing production with the confidence of someone who absolutely should not be editing production.

Check current context:

  • kubectl config current-context

List contexts:

  • kubectl config get-contexts

If needed, switch:

  • kubectl config use-context <context-name>

Good habits are the real superpower. Kubernetes is not hard; carelessness is.

Deploy a Sample Application (Your First Kubernetes Win)

Let’s deploy a small app and verify the full flow: deployment, service, and access. You’ll learn more here than by reading 200 pages of documentation that ends with “and then everything works.” Spoiler: “everything” rarely works on the first try, but we’ll fix it.

Create a Namespace (Optional but Recommended)

Namespaces keep things tidy. Even if you’re experimenting, it’s nice to have boundaries.

Create one:

  • kubectl create namespace demo

Then use it for all subsequent commands:

  • kubectl -n demo get pods

Deploy a simple Deployment

Create a Deployment manifest. Conceptually, a Deployment manages a set of pods. Here’s a simple example: a web app that serves content.

Create a file like demo-deployment.yaml with content along these lines:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  namespace: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80

Apply it:

  • kubectl apply -f demo-deployment.yaml

Check rollout:

  • kubectl -n demo get deployments
  • kubectl -n demo get pods -l app=web
  • kubectl -n demo describe pod <pod-name>

If pods aren’t running, use kubectl describe and kubectl logs. Pods failing to start are common at first due to image pull issues, missing permissions for pulling from private registries, or resource constraints.

Create a Service to expose the pods internally

Pods have ephemeral IPs. Services provide stable networking. Create a ClusterIP service first (internal access inside the cluster).

Create a file demo-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: web-service
  namespace: demo
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80

Apply it:

  • kubectl apply -f demo-service.yaml

Verify:

  • kubectl -n demo get svc

At this point, the app should be accessible from other pods in the cluster using the service DNS name:

  • web-service.demo.svc.cluster.local

Test connectivity with a temporary debug pod

To confirm the service works, launch a temporary pod with a networking tool (like busybox or curl image). Example:

apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: demo
spec:
  containers:
  - name: curl
    image: curlimages/curl:8.8.0
    command: ["sh", "-c", "sleep 3600"]

Apply it, then exec into it and curl the service:

  • kubectl -n demo exec -it curl-test -- sh
  • curl http://web-service:80

Exit the pod when done (delete it afterward to avoid clutter):

  • kubectl -n demo delete pod curl-test

Success. You’ve deployed and verified a working internal service. The cluster is no longer a mysterious box; it’s now a functional machine. Next: access from outside.

Expose the App: Ingress vs LoadBalancer (Your Choice, Your Consequences)

Kubernetes can expose services in multiple ways. Two common choices are:

  • Service type LoadBalancer (direct external IP/endpoint)
  • Ingress (HTTP routing rules managed by an ingress controller)

Ingress is usually preferred for web apps with multiple endpoints, because it centralizes routing and allows nicer domain-based configuration. But LoadBalancer is simpler for quick tests.

Option A: Use LoadBalancer (quick external access)

You can change the Service type to LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: web-service
  namespace: demo
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80

Apply the updated manifest (kubectl apply again):

  • kubectl apply -f demo-service.yaml

Then wait for the external IP/hostname:

  • kubectl -n demo get svc web-service

When the EXTERNAL-IP is ready, visit it in your browser (or use curl). If it never becomes ready, check events:

  • kubectl -n demo describe svc web-service

Common issues include load balancer quota limits, misconfigured networking permissions, or missing cloud integration components.

Option B: Use Ingress (HTTP routing like a grown-up)

Ingress requires an ingress controller. Many managed Kubernetes services provide one as an addon or allow installing one.

In TKE, you may find options to enable an ingress controller, often with managed support. If it’s available, enabling it in the TKE console is typically the smoothest route.

Once you have an ingress controller, create an Ingress manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: demo
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

Apply it:

  • kubectl apply -f demo-ingress.yaml

Notes:

  • You may need a host configured (or you can omit host depending on controller behavior).
  • DNS and certificates are separate steps if you want HTTPS.
  • Depending on your setup, you might use annotations for ingress class, load balancer integration, or SSL settings.

If the ingress isn’t working, check ingress controller logs and ingress events. Kubernetes is helpful, in the way a GPS is helpful when you’ve taken a wrong turn: it tells you you’re lost and provides a route to become found.
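One common gotcha: modern Kubernetes prefers `spec.ingressClassName` over the older ingress-class annotation. As a sketch, here is the same Ingress with an explicit class; `nginx` is an assumed class name, so check `kubectl get ingressclass` for what your cluster actually runs:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  namespace: demo
spec:
  ingressClassName: nginx   # assumed class name; verify with `kubectl get ingressclass`
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```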

Essential Addons: Don’t Live Without These

Managed Kubernetes services often make it easy to enable addons. Addons are like the “quality of life” features you add after you’ve proven the cluster works but before you start caring about production readiness.

Metrics and Monitoring

Monitoring tells you what’s happening. Without it, you’re guessing. And guessing is fun for party games, not for production incidents.

Common monitoring targets include:

  • Node metrics (CPU/memory usage)
  • Pod metrics (requests, latency, restarts)
  • Cluster events (scheduling issues, scaling activity)

In TKE, enabling monitoring often integrates with a managed observability stack. After enabling, confirm that metrics appear and dashboards load.

Logging

Logging is your time machine. When something breaks, you want the evidence.

At minimum, you should be comfortable with:

  • kubectl -n demo logs <pod-name>
  • kubectl -n demo describe pod <pod-name>

Better logging means centralized log collection and search. Enable whatever your environment supports in the TKE console.

Ingress Controller and DNS Helper Components

If you use ingress, ensure the ingress controller is healthy. Also, if your environment uses DNS automation or certificate management, confirm those pieces are configured.

Security Basics: Kubernetes Is a Tool, Not a Teddy Bear

Security is one of those topics you can ignore right up until you can’t. TKE doesn’t remove the need for good practices; it helps you implement them cleanly.

Use RBAC (Role-Based Access Control)

RBAC controls what users and service accounts can do. The default setup may be adequate for learning, but for real environments, use least privilege.

Practical guidance:

  • Create service accounts for applications.
  • Grant permissions only needed for that app.
  • Avoid using cluster-admin for everything. It’s like giving everyone the keys to your house and telling them to “be careful.”
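As a minimal sketch of least privilege, here is a namespaced Role that only allows reading pods, bound to a hypothetical service account named web-app (the Role/RoleBinding objects are standard Kubernetes RBAC; the account name is an assumption for illustration):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
- apiGroups: [""]           # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: web-app-pod-reader
  namespace: demo
subjects:
- kind: ServiceAccount
  name: web-app             # hypothetical service account for the app
  namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```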

Secrets Handling

Kubernetes Secrets help store sensitive data, but you must still manage them properly. Options include:

  • Use Kubernetes Secrets for small-scale setups.
  • Consider encrypted secret storage and secret rotation in production.
  • Avoid putting secrets directly in your Deployment YAML files stored in version control.
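One pattern that keeps secrets out of version control: create the Secret imperatively, then have the pod reference it by name. A sketch (the secret name `app-creds` and key `DB_PASSWORD` are illustrative):

```yaml
# Create the secret out-of-band so the value never lands in a manifest:
#   kubectl -n demo create secret generic app-creds --from-literal=DB_PASSWORD=changeme
# The pod then references the secret instead of embedding the value:
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
  namespace: demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: app-creds
          key: DB_PASSWORD
```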

Network Policies (When You Need Them)

By default, many Kubernetes clusters allow broad connectivity. NetworkPolicies can restrict traffic between namespaces/pods, but they require a compatible CNI setup and correct policy rules.

If your environment is strict, plan network policy early. Retroactively locking down can be time-consuming, like trying to reorganize your closet after you’ve been living in the laundry pile for months.
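For a sense of what a policy looks like, here is a sketch that allows only in-namespace traffic to the demo app on port 80 (it assumes your CNI enforces NetworkPolicy, which not all do):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-same-namespace
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: web            # policy applies to the web pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}     # any pod in the same namespace
    ports:
    - protocol: TCP
      port: 80
```

Once a pod is selected by any Ingress policy, all traffic not explicitly allowed is dropped, which is exactly why retrofitting policies late is painful.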

Common Troubleshooting: When “It Should Work” Actually Doesn’t

Here are the most frequent issues you’ll encounter when setting up or deploying workloads on a managed Kubernetes service. Think of this section as the “medicine cabinet” you wish you had opened sooner.

Pods Stuck in Pending

Symptoms: kubectl get pods shows Pending for longer than expected.

Check:

  • kubectl -n demo describe pod <pod-name>

Look for scheduling errors like:

  • Insufficient CPU/memory on nodes
  • Node selector/affinity mismatch
  • Volume provisioning issues (persistent volumes not available)

Fixes include adjusting resource requests/limits, scaling node count, or correcting selectors.

ImagePullBackOff or ErrImagePull

Symptoms: pods fail because they can’t download the container image.

Check:

  • kubectl -n demo describe pod <pod-name>
  • kubectl -n demo logs <pod-name> (if logs are available)

Common causes:

  • Wrong image name or tag
  • Private registry requires imagePullSecrets
  • Network egress restrictions

Solution: fix image reference and ensure credentials are set up for private registries.
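For the private-registry case, the fix is a docker-registry Secret referenced via imagePullSecrets. A sketch (registry address and credentials are placeholders):

```yaml
# Create the pull secret first (values are placeholders):
#   kubectl -n demo create secret docker-registry regcred \
#     --docker-server=<registry> --docker-username=<user> --docker-password=<pass>
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo
  namespace: demo
spec:
  imagePullSecrets:
  - name: regcred                        # must match the secret created above
  containers:
  - name: app
    image: <registry>/<repo>/app:1.0     # placeholder image reference
```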

Service Not Reachable

Symptoms: ClusterIP service exists, but curl returns nothing or times out.

Check:

  • kubectl -n demo get endpoints web-service
  • kubectl -n demo describe svc web-service

If endpoints are empty, your Service selector may not match pod labels. Kubernetes will do exactly what you asked, not what you meant.

Ingress Returns 404 or 502

Symptoms:

  • 404 Not Found suggests ingress rules don’t match the request path/host.
  • 502 Bad Gateway often suggests backend connectivity issues between ingress controller and service pods.

Check:

  • Ingress resource events: kubectl -n demo describe ingress <ingress-name>
  • Ingress controller logs
  • Service endpoints and pod readiness

Also verify that the ingress class annotation/spec matches the controller actually running.

Scaling Doesn’t Happen (or Happens Slowly)

Kubernetes scaling can be delayed due to:

  • Autoscaler settings (min/max, cooldown periods)
  • Resource requests so large that no node in the group can satisfy the pending pods
  • Node group constraints

Check autoscaler status and events. For production, you should test scaling behavior before you rely on it during high traffic moments.
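Pod-level autoscaling is usually driven by a HorizontalPodAutoscaler. A sketch targeting the earlier web Deployment (it assumes a working metrics source, such as metrics-server or the managed monitoring addon):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  namespace: demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale when average CPU use exceeds 70% of requests
```

Note that CPU-based scaling only works if the target pods declare CPU requests, which is one more reason to set them.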

Operational Good Habits: How Not to Create a Disaster

Once your cluster works, it’s tempting to stop thinking. That’s when clusters become haunted by “mysterious” failures.

Use Resource Requests and Limits

Always specify resource requests and limits for containers in meaningful environments.

  • Requests influence scheduling.
  • Limits prevent one container from consuming all resources and ruining everyone’s day.
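Applied to the earlier web Deployment, the container section would gain a resources block like this (the numbers are illustrative, not a recommendation):

```yaml
containers:
- name: web
  image: nginx:1.25
  resources:
    requests:            # what the scheduler reserves for the pod
      cpu: 100m
      memory: 128Mi
    limits:              # hard ceiling the container cannot exceed
      cpu: 500m
      memory: 256Mi
```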

Prefer Deployments, Not Manual Pods

Deployments (or StatefulSets for stateful apps) provide desired state control: rollouts, rollbacks, and replica management.

Manual pods are like hand-made paper airplanes: impressive for five minutes, terrible for reliability.

Rollout Strategies and Health Checks

Liveness and readiness probes improve reliability:

  • Readiness probes decide when a pod is ready to receive traffic.
  • Liveness probes restart unhealthy pods.

Without these, Kubernetes may route traffic to pods that aren’t ready, creating weird intermittent failures.
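As a sketch, here is the web container from earlier with both probes added. Path `/` happens to work for nginx; real applications usually expose dedicated health endpoints:

```yaml
containers:
- name: web
  image: nginx:1.25
  ports:
  - containerPort: 80
  readinessProbe:              # gate traffic until the pod responds
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 3
    periodSeconds: 5
  livenessProbe:               # restart the container if it stops responding
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 10
    periodSeconds: 10
```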

Production Considerations (When You’re Done Playing and Ready to Ship)

Let’s talk production. This part isn’t meant to scare you. It’s meant to make you the person who doesn’t get paged unexpectedly.

Availability: Multi-zone/HA Control Plane and Node Groups

In production, aim for high availability. Depending on TKE features, you can configure multiple zones or node distribution.

Ensure:

  • Node groups distribute across failure domains (if available)
  • Your workloads use replicas to tolerate node loss
  • Your ingress/load balancing setup can survive failures

Upgrade Strategy

Kubernetes upgrades can impact workloads. Always plan:

  • Test upgrades in staging
  • Watch compatibility notes (API deprecations, runtime changes)
  • Use rolling updates when deploying your own apps
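For your own apps, the rolling behavior can be made explicit in the Deployment spec. A sketch based on the earlier manifest, with a conservative strategy that never drops below the desired replica count:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  namespace: demo
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # allow one extra pod during the rollout
      maxUnavailable: 0    # never take a serving pod down early
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```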

Backups and Persistent Storage

If you have stateful workloads, persistence needs careful planning. Ensure you know:

  • Storage class behavior
  • Volume lifecycle (provisioning and reclaim policies)
  • Backup strategy for databases

Storage problems are rarely urgent… until they become urgent. Plan early.

A Tiny “Real World” Example Workflow

Let’s tie it together with a realistic sequence. Imagine you want to run a web service and expose it over HTTP.

  1. Create a TKE cluster in the chosen region with a VPC.
  2. Get kubeconfig and confirm kubectl works.
  3. Create a namespace (e.g., demo or app-prod).
  4. Deploy a web app Deployment with readiness/liveness probes.
  5. Create a ClusterIP Service.
  6. Enable an ingress controller (via console addon or installation).
  7. Create an Ingress rule pointing to the service.
  8. Verify routing: test host/path requests.
  9. Enable monitoring/logging so you can sleep.
  10. Harden security: RBAC, secrets, and network policy (if needed).

At each step, you validate with kubectl commands and a simple test. That’s the secret formula: build, verify, repeat. Kubernetes rewards people who pay attention.

Quick Reference: Useful kubectl Commands

Here’s a small toolkit you’ll use repeatedly.

  • List nodes: kubectl get nodes
  • List pods: kubectl get pods -n <namespace>
  • Describe pod: kubectl describe pod <pod-name> -n <namespace>
  • View logs: kubectl logs <pod-name> -n <namespace>
  • Check deployments: kubectl get deployments -n <namespace>
  • Check services: kubectl get svc -n <namespace>
  • Describe service: kubectl describe svc <service-name> -n <namespace>
  • List endpoints: kubectl get endpoints -n <namespace>
  • Apply manifests: kubectl apply -f <file.yaml>
  • Delete resources: kubectl delete -f <file.yaml>

When in doubt, kubectl describe is your detective. It tells you what happened and why Kubernetes is being dramatic.

Final Thoughts: You Now Speak Kubernetes (With a Slight Accent)

By following this guide, you should have a functional understanding of how to set up and use Tencent Cloud Kubernetes service (TKE). You learned how to plan a cluster, create it, connect with kubectl, deploy an application, and expose it through services and ingress. More importantly, you got a framework for troubleshooting — the part people skip until their pager starts singing.

Remember: the first cluster is rarely perfect. Even experts sometimes misconfigure a selector, forget an addon, or pick the wrong context. The goal is not to avoid errors entirely. The goal is to diagnose them quickly and keep moving.

Go forth and deploy something useful. And if the cluster ever behaves oddly, just tell yourself: “It’s not broken; it’s teaching.” Kubernetes always has a lesson plan. Sometimes it’s just printed in YAML.
