Provisioning SAP HANA Cloud databases from Kyma and Kubernetes 3: Kubernetes

This is the third of a three-part series on provisioning HANA Cloud databases from Kyma and Kubernetes environments. Part One introduced the concepts, Part Two showed how to provision a HANA Cloud database from the Kyma environment, and this part shows how to do the same thing from a Kubernetes cluster outside SAP BTP. It has many similarities to Part Two, but adds the setup steps required to connect your Kubernetes cluster to a BTP subaccount.

This post walks through the process using a local Ubuntu Linux machine with the minikube tool available to create a local cluster. I hope the process is reasonably transferable to real Kubernetes environments. It assumes you have admin privileges on a BTP subaccount.

In a later step you will install the SAP BTP Service Operator (BTPSO) into your Kubernetes cluster. For information about the BTPSO see its GitHub page or the product documentation. The BTPSO links Kubernetes to an SAP BTP subaccount by making requests to a dedicated Service Manager instance. So, you need to set up a Service Manager instance in your subaccount for it to talk to.

You can create a service manager instance from the BTP Cockpit: go to the subaccount, open the Service Marketplace, and find the Service Manager tile. Then click Create, choose the plan “service-operator-access”, choose the runtime environment “Other” and give it a convenient name. You can leave the JSON page blank.


Create a Service Manager instance

The next step is to create a service binding to this instance. The service binding contains the credentials and URLs that you will supply when installing the BTPSO into your Kubernetes cluster. Click the instance in the BTP Cockpit instances view, and you can create a service binding from there.
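For orientation, the binding's credentials include an OAuth client ID and secret, the Service Manager API URL, and an authentication URL. A sketch of the shape, with the values elided:

```json
{
  "credentials": {
    "clientid": "<OAuth client ID>",
    "clientsecret": "<OAuth client secret>",
    "sm_url": "<Service Manager API endpoint>",
    "url": "<authentication endpoint, used to fetch tokens>"
  }
}
```

These are the same keys that the jq expressions in the CLI variant below pick out.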

Alternatively, you can create the instance and the binding using the btp CLI. Once you have logged in, and used btp target to set the target to this subaccount, create an instance as follows. The sample uses the bash idiom x=$(cmd), which assigns the output of command cmd to the variable x. These variables will be used when installing the BTPSO into your Kubernetes cluster:

> service_plan=$(btp --format=json get services/plan --name service-operator-access --offering-name service-manager | jq -r ".id")
> btp create services/instance --plan ${service_plan} --name service-manager-for-k8s
> btp create services/binding --name demo-binding --instance-name service-manager-for-k8s
> # get the clientid and clientsecret credentials, using "jq" to parse the JSON description
> clientid=$(btp --format=json get services/binding --name demo-binding | jq -r ".credentials.clientid")
> clientsecret=$(btp --format=json get services/binding --name demo-binding | jq -r ".credentials.clientsecret")
> sm_url=$(btp --format=json get services/binding --name demo-binding | jq -r ".credentials.sm_url")
> url=$(btp --format=json get services/binding --name demo-binding | jq -r ".credentials.url")
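Before moving on, it is worth checking that all four variables were actually populated; an empty one usually means a typo in the binding name or in a jq path. A quick sanity check, using nothing beyond standard bash:

```shell
#!/usr/bin/env bash
# Check that the four credential variables captured from the service
# binding are non-empty before they are passed to helm later on.
missing=0
for var in clientid clientsecret sm_url url; do
  if [ -z "${!var}" ]; then   # ${!var} is bash indirect expansion
    echo "missing credential variable: $var"
    missing=1
  fi
done
if [ "$missing" -eq 0 ]; then
  echo "all credentials captured"
fi
```

If anything is reported missing, re-run the corresponding btp command before installing the operator.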

You should now see the Service Manager instance and binding in BTP Cockpit.

To set up a local Kubernetes cluster, you first need docker running.

> sudo service docker start

Then you can create and start a minimal local Kubernetes cluster. Obviously, if you are already working with a cluster created in some other way, you can use that instead.

> minikube start

That takes a minute or two to start. You can check when it is ready with kubectl.

> kubectl cluster-info

You may want to install the Kubernetes dashboard to give you a graphical view into your cluster. Not that there is much in there of course 🙂

> minikube addons enable metrics-server
> minikube dashboard

You will also need cert-manager, to hold the connection information from your cluster to BTP and other private information, which it stores as Kubernetes secrets. Here is how to install it as of the time of writing; the best version number to use may change:

> kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

This also takes a minute or two to be available. cert-manager comes with its own command-line interface, cmctl. You don’t really need it for this walkthrough, but if you have it you can confirm availability with this command:

> cmctl check api
The cert-manager is ready

Now you can install the SAP BTP Service Operator. The GitHub pages use helm instead of kubectl, and I’ll do that too, although it may be bad practice to mix the two. The variables from the step above are used to define the Service Manager credentials, and the appropriate version number may have changed by the time you read this.

helm upgrade --install sap-btp-operator \
https://github.com/SAP/sap-btp-service-operator/releases/download/<version>/sap-btp-operator-<version>.tgz \
--create-namespace \
--namespace=sap-btp-operator \
--set manager.secret.clientid="${clientid}" \
--set manager.secret.clientsecret="${clientsecret}" \
--set manager.secret.sm_url="${sm_url}" \
--set manager.secret.tokenurl="${url}"

BTPSO creates its own namespace (sap-btp-operator) and you can check that it is installed by listing the pods in the namespace and confirming that their status is “Running”:

> kubectl get pods -n sap-btp-operator
NAME                                                 READY STATUS  RESTARTS AGE
sap-btp-operator-controller-manager-7dbd68cf4f-cmtbq 2/2   Running 0        32s
sap-btp-operator-controller-manager-7dbd68cf4f-pct7r 2/2   Running 0        32s

You are now ready to provision BTP services in your subaccount, from Kubernetes. The procedure is exactly the same as described in Part Two for Kyma (but now with the KUBECONFIG pointing at your local Kubernetes cluster); it is reproduced here for convenience.

# Create a namespace for HANA Cloud service instances
> kubectl create namespace hana-cloud
namespace/hana-cloud created
> kubectl apply -f hanadb-instance.yaml --namespace hana-cloud
serviceinstance.services.cloud.sap.com/hana-instance-tom created

> kubectl get serviceinstances --namespace hana-cloud
NAME              OFFERING   PLAN STATUS           READY AGE
hana-instance-tom hana-cloud hana CreateInProgress False 66s

Whereas creating many BTP service instances involves little more than writing a new set of metadata, creating a HANA Cloud database takes some minutes, as it involves spinning up a container with tens of GB of memory and hundreds of GB of storage. The CreateInProgress status indicator changes to Created when it is ready.

Here is a sample hanadb-instance.yaml file that defines a minimal set of parameters for a HANA instance.

apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: hana-instance-tom
spec:
  serviceOfferingName: hana-cloud
  servicePlanName: hana
  externalName: hana-instance-tom-ex
  parameters:
    memory: 30
    vcpu: 2
    generateSystemPassword: true
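Once the instance reaches Created, credentials for it are obtained through a ServiceBinding resource, as covered in Part Two. For reference, a minimal manifest might look like the following sketch; the binding and secret names here are illustrative, not taken from the series:

```yaml
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: hana-binding-tom
spec:
  serviceInstanceName: hana-instance-tom
  secretName: hana-binding-tom-secret
```

The operator writes the generated credentials into the Kubernetes secret named by secretName, in the same namespace as the binding.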

The process for getting the generated system password is listed in Part Two. You now have a HANA Cloud database instance provisioned from your local Kubernetes cluster, as part of managing the resources that any applications running in that cluster need.

